[Binary archive content not rendered: tar archive of Zuul CI output. Recoverable member listing:
  var/home/core/zuul-output/                      (directory, mode 0755, owner core:core)
  var/home/core/zuul-output/logs/                 (directory, mode 0755, owner core:core)
  var/home/core/zuul-output/logs/kubelet.log.gz   (gzip-compressed kubelet log; binary data, not recoverable as text)]
洳D4s 48!U2$A'oNH7TG$2(vB|&6̲&M.gJFɃL4 IhE!"]Y&r+< 38o;o ºW ~cEg-] dFS-$QiuIu,?۟jk+` }JQCPN2He:Q S0Ψ!B;A*+HrZ5'(bo7趢E?^̫wi￳ߔ ; q?&Ǎ.~r<̻/wfWqGP7\دaOfsܮ8zi]rrq0q$wt4 kFaVgZ? Y̻h8^LfEԖiddlY^:q!#y`ѥ]_MbozYm#%;ϭNÖN%į=!;1~凓>L|ۇx;_qj+M$}4 ~~hJ -6qS]/8}`}`nvYFpWrdYѵ]4N*s6c!, }s\6r\j f#>JCOc>y.Y>zLa[Ex-cjR̍uY.N@Pݞ&M;k+ QG Qi'gC7mitpԊ$cIJq넗9s71x*@EtS{FE6zGZ~CS*bW.y`1NJ0d`yRh.*RwUU7Êo^*(q3졇h9j]w]w]kPrͨ&A\9՞[լu^4S-,"T" y`N.*|^|2J7(A #BPRj"ȝA;)LD$4B-BWL 67'umޓ kC qG9yy[EϧmR;| %]B=dûѰ>-ǯSQ-mՂכ x3Y%_uÍT}* Y]of0c)Ą *NTR0A(cko캞GdV&ԟ}:)L>-}ǭ1ZͰ'T[;o|9>ox LNWwجՅ-sk7I^vc"Y`C2 v76 a00xM˒l+AndMI-H$kU.F?TYn;Sw%L {MNw-}eҩIe./:dP:(Gڢ{ߢ6_`sƗEEXmsI^:"bI_b(Fkm{\6gE@J)DREBIҗxA{\pvԦ cX}0'׼_/y7"gmEY[7xgCE l]*/gӕC"$mZ}yKWLKu\<\o^^ W7 YI=c؇'\ܢ2KW>ggo[0'rO*rq︺Ȼk oٍ%d(cvri'|/LY}$zŽ9׿ャ}9>Pl{Cgvmv|= bX0M{ ?3Mnhpuz5~4;}3|g'Csyuq:,#~ン j (n>ՍՄqr}vu 5=sI-yϧ9I|g]ہY;m΢)@B f2aa:vV@H55ElQAQ̦Ƣڄ)2n\O)d+| X"e䬦Pd6#YMo H$C_IHl*u%ߙu!࿵U1c`! 44A`XP^PՌ.ήj.=}Wm'[.nK7']-( x)5JflFzҊ3^zWQnmuqyUV8 [xy2]=?99pr<56i$L IB$'`Y6PLRmQ2fVOYUjl (XjI Aj `%[ضBљsaE5v3rkpvL}͸c[-G="اԇRʠEDUE 1d,o ިLN6oزEI4Շ%@Z"[Ȍ %![5)D(co^lTg}c}،7SU1FlG`+(d{ Hr^jsKxd%-ZIF/r.4Ո}lg励EU ! ۗLL1%ʹPJZSc،/:^γ"ٌKՋX/Q/z!G^ ĤA|1 Bs R}NC^| /wl*vTX YP=7fV=Sf7i.OH}dOS8ȯ yhj]Y{MF PN)kˊP*}tj'^MOX',*+O~75nMS,+Y猓5: (륑56L C6S0|I,siݽ&914L;~vQ,-G3Kgv; +U'g}='8d7]5oCRILsG0]1lJw$_W>֐țuaRÁ;ֳѷ߼> &KμL-V-R[25:]2YcRlu$?n{-wbt3wDfqTd9^Gryv)Gzy(Wwg ж q]Ç;C^ܞtde)61PLQw.3}SH39:[\,[ɒ%CdN($w^QbF\Ü1A0ZHU?RjcdspDB 5S(TWdfdiUPᲢl`bTE *=d`z5#gC9@O?,m$$ U9B (AIZ+Y1 'pF(-E3E~f=2/g"k |*j$MΞOQTG j˓}b3Iϋ*x`,x56e\V8{h8|_ĖUq~!z2Ild5&Qf,]BGrGi: DM1f,o·I9 5!iPM-u-DN i,7 IɄ2G;X*ڌCX:օ5Y|I_hQRvz8e5aw0f>OV.k66OCNVX;lћja9O! j`P#_I84c m<͋vG70hYZkYɼZ! m+Q`HBzY 0I)UE Hc2 U>Ph]׿9]GJ-ݪ罥ҬNF߳OYR"}xWTY |>a .#:z&tD$2[[rxoۊF`ڑ0 .BY+;Fp4Gx'3X(+ P UQE*< H|9c=-*-01%$!)']E |-O $H9EPţQ5GYX'^Ժȟ(:}?y lܘ=Ocw)o?~px86`h='Vh+բ>%lC9R"ZrTy-whբc-hlHV}ZZ(0 I?,AZZ%x(K D &:g[ǺN$EI9|y8 Ӥ}NJ67#g=cx`)?Œ)PR&ЪSQKB5$ ,YI'6ÄB, (uMʪn1&2'Rn+)%e$YR,: IY(H6Yư8*B/XzI M~ ΃KZזBT|([}T91dũ|IZ^+@Er3F7>%c 1p QiVZrCT3)ulĆbBqB![^F'ȱ}޲. mUΠL@@aţEc)q.܇i?cbE`0h]3ߥCK;#o,=T]2@z&CD"bV_{^ɸ]/l3O&3 FK)Lt$eM8x3yŒDNs396FFɉj3&8n!Fe31X,Yb^HfF9pʒ)*+jQq_\8b8 ˸PQb?a}n: U}z|fk3/W5ZU5{Gl~Qt2^dR5Y#3I3֧!6iI뽳hnl?Z(/tēs3+S{wI0کD? 
0wj|au]ѥK0ƾo/rђ=/m' /Dʻ8E\7 dP *>ȸC< dJy$Plye0 AyphIɓt ¥4=IJuOFWp ozi#Y+{<;)aݱ]̪WFtY5 {=3m30H FT +xby&Rh$ՂT),8d<4 =|m4'gґNDh!S_F.*Ȍ:0]ugkgJ%%0^ o>Y <\/ipWAۿ&ܧ=Ol~qLG"h&qv~8)ǒt!OC@d F|6ԣ}ڰv69S/=^s OJ^5a 5ww]r +U0v[Q!ŭ%KVmr7 Gʵ}r'O)}D%PC{J܍xHsDWEhY!-Ja:nxJAvUY螸$ѥ]i;%朸- cU{N(Q#4: US!"2sd`N9$,hF,Y #ŤƩȜ8LMTh,F~}4֓*왉Y:b=/1%I0!ZQ3!V@h()~@  iWm&m\enͺq켫{=NE^xXuȻ̓OtB͖\nu>|+l-y"$M0F8LVakQyiKigc3^n 2jשּׂΆb7[+ +3 Q*9OF:$GͲwch^ԨetIrƣDP4kN3mX/ :xK 4Zj SĩIz!&9Bрe\U 0rc3 (SSjڣf\~(Lom,o~z5ߓ#Ƭ F]L[gy{IHnEn8DOqVI 6}uP'#Q.##w903:xkP:#g tPXoT])fa۔.} {hu(Y,zPׅg%6Rhb~M5t痻K$\.As2kfw^/:~<ҝ97ܪrbѮ9w4zo#R\8ulY]1Oz޼o7n[?\cZƝa>m;ze{eJ N d#6ܴ,Di/?tg[=iJICe0# ,+Qic$HY#<&ZfD|#JE ׭'@iQ[IXWSgl+;?L/Vq(IdZŻzחۄuw[6돝Aឤih<;g~ig+WÆ•Giqɱi<!gbPM**a 7(K*gbz0/b4 \C^i ]>*z ˹V$W82TV:҃Jkkq-Uw}՘Oe˫?bjW @iWN_w̫x5"/:.ͼ7y&ߧj> ,hGby2>9]vch4MXt%_]u$MzZR\xwKg}͈f{471Mg>Q.Ouq6\;lpzK7 Cl\ږ08"MY8G|4[.P>- aPE6Ka(ODp޼¼{=8XPjt*8#?^(j5Mn6֧q:׋OOm(|un%;Tjq\HiwcY+k""zf4lګvY Ĉe!‘~~-@e"᱓_,f7㋭cNw{rqVZp&mn vcpqg02Ȍ靨o?s:#K=h-wK`Yk<F.d6r!!{56gG AS ̹}"H jcEy]Y;\v" PUj63V9=C>銉IW7$_\/kg(pJy5|UK_ W (Q7@19 zd` %*zU19'AxL1jiMd%UߡH!Ȅp{JrLtU)@xȲ5HfHYrcrYH*sap 32qRWf\م^CVSgh٧ \~+%n?vͪ)zzw8]޳Q197 Q<.$yH1R({m]JDseFMw'J<ܸ1gzu_1|j %R 9X`PO_6^my^h_]%ʊd[gv6HsM9Yzゲ˒WɁV9Цq}z˭t[z.B~2DvҔb2; LF1mb=zdm5ZBR PS(bLND1#ƂעicنEUV p٨DI AJnw}^l_{}ap-i&yE5mf_\ygM݈w`jz ¨0-z ¨0-z #z[z ނBUBeT*U/Zj)VAZZ|Z0Qoa[FQoa[FQoa[xz fïa0ja0j/;K0;=/{CvA  Ӈ*'P'*LPTk LBlnR[yJߝR/)>"-=.hsO@, {Rf$U;Ae%X5;(C[g\']}^oj _eU χ|rp!YsOC>SWx7} Ld/R/`QqmUN\|j@kt3Ioq?h;VOwiExUB\ʆcCgiՒ)7| 1;ልRR' %MNR gk\ Y FQ(GxX9+vUr-QGx"kN!f0;5̑ٻjHQ"KmS1XJ X Jkd|p`7q s/[RT'_]+{OOIPyue&ԧvUڮk͋ivåFȵNG7؀K~"˵aջIe$FO//dc6Os~A9i_v77_|YRd+^ZE׻: 6:/ iߛn M [wfMZ5njk_VLw^;Pv6YUUf@Մ%ق mN~PI혷n(dOٕZ*MZ [u% AǪ(dV9zJ:({oʾeȈyDž91vbd (ժEeQDt>}xkՒJE(|PӶi/ ɮn&̐HXg@JBT;j7q6*8X_ǝ!<06*~?a~a@Ji]"0`1Q_t*8O02Y'=aCJIl4zipLK ɔE`DQk3*|a7x//\^o];9 _-г/lr_ +䀠8[I6e6hV#fց6.*@#{bB6l- ً(MEH P S|\lQIE1kwӎzm=@Oɩ *W4U+e=z:F$0- zf VV9 j-*ȭ&TWRc&P社??*xn/Q8U]Ld8!0xNI(dd+,t̽ '~گ:<4ӊr'["O'st휿*w7g=5MIakXJAK1d+: dr0b9ubKOkXͼ䊭ηl{ gTewPf o-e|:;Bsy/m؁(WL)$7EJNͲqvvjWv/3-ݖr}?6eSۥ=׎V^ye;[aڷ] }*] bYMŕVu$ udƨbE(e& Rl:M[-DJ(2*-?"* ֠m:F1 2Z:nދ&R&j{}&o4T\U*GJc)YEI1bTJ\Dlc Kh)_R>خ1;1m̹B0NlF1e9JBl $6U4ڊ~TJP|P.ge6O6vιn[{әʬB[-2&44A{J)(yLȯAODW;= = [ALyo I[䋩&A!b9fdmd]}0{BМ΃[!1T0 C0$s͐٢}BvrTEx*{#̸vq(ͧmR."jUS@19TP[$Y͎1kVYBb%Brr^;{_LRPlN EdJfC\S5MV*b(m|U5^fdQdihQ*p#e}Φ:EQfV䱝8[ܮ|f6WUNKLZ\4%sFQqc/C^U5siŁa=L |z-t[g 2kP!52(5ЃSǂ^;8r:)4V7/IIϋv?:9}gtŒFb9Z%G%"> |t )eB $qtr3<ۗ&9tMrT&9j 7#/L[ \)uSDrqJcfu7)F䶁x1rkE[_3z: q yak*v\X4=3089tLȵ]p/l4m:z)ghl`4}gkhgd'2# }f2Z$2Lg=8](ٻ6n$WX{xsU>lžT퇭d?n*6E*$-ǹ~!)Qȡ4UIr4ݍY$QdX9 b{NKtX".pKiW!}Gy[)7<ylqMw4mb؜͟.v%!0ziP]4鹂濐To ^r跛۱MSǟƓ/ "ݗr䮞=.ݝZc/v(b~ku)31@.Ӽh㶦 ˬR ӕ󘫒g8e{W Wz ڮv=]hzbk}3!-ӭ#-*R}F'JJ<*+*LT(eN!qR|U` S|앭)>lR|ؖR|O#B֙2G(S6TP6'm9WY Vij$T )'A|B e:* yZ?hdnH^k<A㡻qj% cVs8"ǫ/Nj@߂b>t Z 1kKS|>ͧ_h3*Faδ RT'™3IV<ˆǙ=qfGq!aM):J N䨌%',jlTFCN*oO< WёdB*. 
xP$0ڃ׎$Neng& jƳqzWp?uez̀(=T}zܢUjBsd/mV$+ V s :Z&mZZ%6hˠG'ZA *SBV&0'SD1$-Xu4>dd  ۭ u,ΣU׋R!}a{ |vJ~?~_Fb3&1rb@T>cPv!d9GG8G(ܽνcx=AY!'AFEB64,ZG:rJY&eX)x%wh5*2N)js sa?/!xCyijAOloX{Dm`ˠ _WtiO4.Jm9j sj:^#RdUҤ-zXor>=Ųjbgus-8úGNk 4X{ \aQq"ܐO,%+$h Ytd-TTZyU5Ĭc4Ҵt™+p<ݮ4KqƂ3 <-s@/h:ɲ򆒯!Nh`U*{u\FY}[ƅU]=l JWJ+tXI U0"ZW0"quRuk;ZH*` FvwBtUײSqWEZgHJy^RL3vPp gi.Xr,nb &z;s~n0"M7`䀻w IVi!t n|8juPLߒ,u&`2NH9h0>`j>ߌI"~--}C8I|7}$,@+YPWph?~r;ix"GU@BfnӀ3}Br2=^KǬD/eZh.ZȭMCt < UAT,ӿ,@_xZ%ާ<*Ge.y?eJQ[ScNVqgM4ƣ:#Z酺TD8_}w[G+>Xjy֑/}4D@ 'QTv9Jv}HihWˡ3K/5/oiϗ8m ͠qL0STDj1[kBTPL%]$Y[_Ƴ/2*y/J1!UgEe6 / ؐRT h!1tﺕr[3},cXnzCwڻXeU]2L/D".ۀGABFDI*R"} 0@>peVN$׆hntrk|)2&x6Jc)jɨJ-5q{q d2 n;YKn)=-{0a"dgw&Oxe M2m+H^VA _i΄ƞWrOuO}Ovđ(V$90 yd)65 nš*uV1>rnR i)z6G,$$g'zЀ%vU+FUu+19$Fc"}j `b.R=#SLVƺdce:M^k#Kڸބ6͠~o֥6ܟvQ_?-eOU7?nب7߽ibfiqL{?%]b 69%0ob(XJωwL+Nޔ}o lgMo =j2l\0X̡Q*h -ID9%[uv˰wR ^PQIUAaDPi){;s1髁Ci)"fflv*b#T`K*X%+2a]B Rح|sܹйUvDZ8r""jƎq9e]$uUnufSDK]MR&JϘˆDތ Frj<ȡ5qŇvHV4/L75|E7zl,Fn&bDKNF'-椭B'N !m[nk yy<Fʠh DgIˑl2[4;$Ѫy4B M+2)24m'l21cYY`6۔ DIٲ&ΎvV/V{[gY{9+}88:k$^^QCVW=8E %zVtWw׻D0 .8ӍuJ\MZ "b`qRܓ@$KQo_f.̐6"##%#^*&`[ZUT-7Nz]mgb8;ѻB8ĨD@d0$|UHa@}F䈶H"ʹUg1i\?m9հ0MÉ)! 0jG{"\T׀{ƒ]F*dLJ H f)H9+P!8 QFUOmyDIU_g(rhķy#OaTRPQ Ѭ,Z {acOݑ\P3 ٕ:xNЛiΌX4&ydI">bJ,9RGs~>DPix`,(3xV0I/ 9)CFJ$s`?"E;Zm5y%@T ؂逰Zo) <x5]bvjnzIOPQ/LjTo6֤x۫JbvZڑ}Fc TMВa͘Cs^D =pyIs,MG]tKJ# $\KrskOVm]m:DONu\R(x.)33r6w0!7ތ63OTsbo}/!$`1|)?lȒ5%E6vICd۰%wWݱm[_ͯe1-Z exZƳꥇw5Bی^9B-Ыz[KFQ[dj~ @b^9|*Sa攃YO-<`DTzR9mQt4D{2FY;RRv%{gq)a ^ر3pR/1n YIx[?vI.$?ΔvwFy7ʲsQiޗJvnSJE zu?J+Eߋ)OzXqdaR/,t+Np8(9Ciyd@S'~-ppEb #z!R%F$㑅HvDꥦrL ` Xy$@=6&"ݱ8[ڛX])Hcx7ЮW~SYn~}9<xe{%6}7y9ؙAx~0}@<0aw,Q,U*AF,G aecO82m.^}1qD0, ^+fB(08D! ,[L)Rg2DThqR"!@w) s$:3pم9wZE4iK?NAw[VhDӼ`ä-)>d gVU;fW<8&x.ꩾ3wѣ!XUVBqHc@=I)ڲidGRL<ph}9i}}lY!sQGyt%Fi=|o(@0FAHJl36rF{,t,غgf4:|"'Wa2=p$'Eq'w <4T8Z7b,0Ua4XBI[9 ]]n2fh8Q8oSzp vD^V[!5f*cĪݓ-Hڽܦ{2fAC,ػ2L~֑`%H}~N ֖nv#er5u[V jW?_/Rں)]RC2\gސ~NH?•o 7p%3e05- 3b 6bj-_ְwhVLsf dVu=a*ՔjY=HgoGuIL+4QӲds݆ɸӭaN@ta.]]5[oPɑHFA҃MH{D'遅//gpvU-R>JŴ<=K A F!זyE΀R{ Ky> C)9nXrxnV%w="X/q)TDfEJb,S^DSDɩf0M[e.0bc7&,k K1^xAŠ(UzW_J4muqp"&epbbRHhINUQ 'Aų=!uKH;y7Q[/`HL$u{b켗VShcK0 %X#.=ӄ}ӛ &52y5&O'_,̝(h,Nǭ.J3I89~2v2dAVBe&x GQ>d+dV6ɍh`nKeˮMD嶽i+}D҈RMR۪ `jXJS݀QHhԨ26ҜY<}Us]Eb :=0WӰET߯킓(+7V)|qGxjI`:GjS%__;" V 6~\ޏ96LNnXF6LmԶ95u΂BC#i`xR7Ӑc1l\VA _ΆE8s @ǯ=7gwg߼}:;w߀ V`Vzm 0l~~<?l?4'] 5CSV~Kn"D(J?d?|.q5T?"2Qo^ɪ6U]4su "܋ )܈q[Ħ|U|H19iSJ/,-h-kOFF1-8lxLRRc*w s64%/߇q8P8R,1N舣(K*JNTZ&,jfTfRL+{d?~UnVͨAؘ݁56z28LZobxq; ldv:&Vt>W_*[.:?⼃8LS<`sAy$XLj.tr(vfZ؊XD>H]6 $*%R1qV{tHbaJ[m˧.ghrY4]_.Bv/^\R] t#^"A[Y6,rUˤm뫽,ƷE `_mŸ@vΖ6m:M˝2 =$y_XKϻU57q(`VbH(L_2RU z޵Ƒ#_<% h heŀA-ՒJ۽`]XbI); X"3#@GD6VGS \TC gk\YO;mޣo>U$Y r-IꜗT2@0^ "D]z̉ٻj td.Rɉ(j֦R"V-RJ)y$;^t3UenSsuV}ryjr/|Զ(yZ-./v识=^‡zqN:zK3riKUfp/ >3o ֠[KZz7T{o7O~qۃ{;ϼqn]^8Bx0Un'Wsվ[ݲ#QBs_zhvCO_+&Ye7{rss 77?(bG,)c:_ pV;Pv6YUUf@ՀlABOvt+QPGҟ+j,EhՕ2`@:V˜Mf圃jWXdeMٷ1ɑ>ɔ%.PZ,,s6\mPT%9{Rʠ03R{Nv5X[d $A:hO sv3g0XʳNc'D%=3mo' 80|XMcP / .xtNIPsVsrه|*DZ54YI=CKdJ" ۤT%IΌٟa~<㵋O>72ޢ| y>ot~~l~gZ;tgkAeK9XR 'Mg>ZSThV.俜Mr*y!j{0[JDLi,Ȱl%VY0!{$|TR[K"!Ռ)u·ٟ~%Ʊdn18eĝ | 09Q41i'abqɻ >Ũۼ^ )`"kFAЇGnN*VS["EC¤v J m>?#~:ey$yq\\g7/yl^wA\# ^EQ*ȟC2FۚܢS^͇b|ֱU,Se]ՏxՏZkUOj6=(o =nZXi{aF)W?g~mZWWՁ *;Z0jI`8: |~Ƕ[ujl e|i֤ Z_iWÏK D|X92Pޖ Q]"T`S%.B."S(=Lytb 胙y #c)Lʧ~ͧ7oセ^u@iPle:4$& S|+8^|qJ 6 Nff!P`N 7AƮKG/lNRXjvqeėaU_;t2קc6doq/]BGj)ŊXT1[$HuաV(ctr]#8Gjv=G}I'G''EBj&s҈FIfջ 9#A q_zܼj=LaN\$+&fBQh 7꺅|Hr_x6;& 9`HJdU' "||ѓ֓?ej.a}[E(VCIU98pƌZZ})uNk]6&.:GX1 We&C"=*zpBcZqXk%a.yuyW:4T?y!cU:6Pe+' 2DF~P=\1Fhvhs2Y'dd48qqS57A[#P$T rඔM#۷5* À9_# I1hHG%{v9 ISc_%[Ɨ*op#꿽Wz1^>|6(~U#֯-(hL΅ j9hcLZx@Yb$Sv~7S-89jF ŒwMd؊V28Yb*Ţm_jh2"#KLCDee1*s6Pfѝ؊=su38[@npTy9b8\2%s5֨PoʵuޫXf%Sa 
˼bY,Ytg97G?ߢb͏j7Zw ydR8 c5rJܪ%.'[Jt,=o]1Ů}lLf<] {nû|GYmeNܐtTC,T|e9R5 jt=zܓ<< 8 Dߘ·EBtaVwo{3EEӾr+|Kby޾ImnJsnO5R~З?\ [i]ȿ{3>?O6)z',Jqn@V<0Dzd@$fz!,?r,3:,tNY[ sSLץj.QP< F俬+i4Jw?;Zlb1`$:jaA9v жz[X91{^] vLވrкXQ26Qh=)ɊL6X!(6>)] Y콥JS.CB1X1JbJmqZǾi zXUa9J| AHOcUch6FE0Z hG JJiL)67d/?E'>i#E'NZkC' /]~89iɋ,(?o{g/.?]lQikǎȇ떵ؖg}WoݍmsSv1)r !-g$H/U.Y8йܤbGUA0T4i`aHEX[s/iQqQ ]>#Z>VS[KFt%[Ȃ"!ƨCeFZAQp>N=) =cڡ)mjk|h"G]uCt~>K% ALCEƕGl~=D9 t~N83Z3pbJᐔIUz]ZJL>j.bjfr})5}]\پfS3ےTUS%Ş>ͺEbm\0i!RWYGk(mؒ->~lqRtwwo vP,J}PDw/O4.T`Dhj$UZY.(!$5˂S! O73NR=#Ϸf ˶{ G1j 16c8%zb9lݓ.l:E* ˢ-h/C7tFmv»,MltQL֑X3D^ tqo}b@"zDس1)N:d!:'^ABI&=Fz$2׉yUA]Y$"q:ڎ`K;6f*MXp&g-|vii,z$C*9ɲBCɝA+qUZ.x mB^Isz֔s=\Xzc>J>*#/<!zFcd23`NLƿ"9oE9lZN G* l!\} LQrw; Bw/9[֫c>UƷHϦ3yĦoN?<ߚo%-::qm47YevC׋Ux'g0ND%mX|Q6G}Ab.:eli][w']nlފ8ݮ[=_5"W`J0L9z[eI0'_gȹ )Zg 9sb3"1wjؔ90K F)FB!hqv[#R[IB`%zOYrRc9פH] ȵ+@kd쌜؝].ZWsz,ܾF̥?vv^4-Eg} *?MO.#U8r[MVJ#Y&$Ǻ\lQRdA'dE2n[Z{> &CRؔ!d1tIٲ#vglF0s_P3x(j{ )sb6"s6uyB&͐YQ!I"!lԤ8FR.nȨF}x91C 0 "vGqx\BpIy"h/~db—S p%C  p \p&$RQȾ$RD#YD; %X:Fٌ_N^u \KKc\=. -YTR lDY#Ng~|^.W)qq/xw<c<|F  M9 )Q[䮢fy"ϧ=bBʦ`*`W>GU $-B>q7awUb>S*E-MqCl:z:;||;2i T̒ېgR3N mb:2ͽwM^M W}+\Q/&-zK11;P!HО2e'P 椕N`H&2.&C*z㻑䅳s}d.ϏlɬSh{Lˬ*MH=*y4_Ëcɴ9p~Bh^{«W4y79k}ȕ ן|j.UQiVqFrP$ʚFFΚ9ݛmW?/WШt}p0JR%|/K5]UVe]=os0W`wsW}gv>s}M <{| 2ހH^L?7.&KRIL<,Vxx+/鲤9?ߒx\e/R1AR:[v`Il0x}(/v>];VGAiw"(amQ:-$m Q 2PjQ+!Je2{D.Kʙ ui@P(Z8kgl=ʅeVB0V˥x+{skJ̔[maZ^˵9-]nD3㤱ď؅!OAiCf-2n+ET:!6$4 Ӆ*yPoeA:Ea4 cr5$Ht 076W"=1εv?֭c\D,8:%NG0zBNdtYn &̦ggab{ni`Ⅰ3Y !UKkȚ)mٻr<{isQr@ !/ԭ =()]H(TS@cJ&XQ"׉#x1ƻCڄR4XsC?FG׃l=^@~懗ߪ[]*$~ z~q7  {i5^uq{ v >Q{$dqʼn]V?:d-%5A=#<=-U GFKRCȌŻ5 M4oxl# xV\N|44C_MOx18 4-fE ~}8;=;\TRk҉٨>[!z4!ѦsZ'NJ.J*&|\{Ey6w77~8,.]UnI"hx|2[vI4OƿϚ·~;`e$EHm>Rì2ڄIO4e*>.]_9<?Mt}^`6L曥qSSM<ΆA;߿+?pwGoч?Њ'r*J Mo#>[C/DWCxˡCn/8㰎%wblܖU,aQ@ | -B۟I .hX/._D˸?Mw9nz̙bwn$YYj>WY >g$:K\7۰Dak)J ٕZpü&(T(kin$Gz]-<H#|x瀧i6qwHJ$ER$3C@cEy].{:C*:4|ӗܹj:9V9w=HSvz:˭.;pnypKi[QMeמM%I+D-m!; tpH冔`A-HN8vv^*bmOGFEk=٢`\lM?./=*/&}Ro75#,cBҵyO%qFvSh1|͍xư{- xPN(N^Z?#?WZzjK%U)/wK~b׷Ds>/J_\d el`Fߣ@fav_h#>я ϱfg?ݬҚmږղ۟0&|M}p gՊ T#qiu9VPH fj!TS3ט^OJ_$Ek!z(bMN,N*g@P|2-ۛmM:zBZBlbZ#׬r%zI]2%$62kmyγ it\0hZp\GdukkwI3ɪ' 8 #ʁ.c_Mg;.sWTwgk_wr

O~X%]֗q^6M!YgSHoۥbDeQ|TFoIX 8 B1r~F+JH9iЉłdA,6 =ތvv)7QzXQEZddɚ/EB p,l"2g#PNBӊLL6 HLd$DˤAdelRBZ}$lg9 Kݣu_GDr"t6^j)J+""VkD^riwoE?}b.':䙗1T zP1MTrgA0INz^ƈ̳c K&'i?#P;aLdJAXϐYqTr@SK'HnY=בx(ӥ)ah'B8Dxf{RD)cv VoGd+=USA( A0Ӷud!6NxWц+<=k׾XG5 yu֡MϳE ?4f9-TukD[$ qT3i4RjC%^>\,2ZI1&+"%"6c=\!ML(Q@mf7n|禜!9a<}ҿSz! ^, fuU](%*porvF Il}OO}Jќƕ% 1ޏFzAiG"?dDVk˾k?=N&uOݽ~{;vn~HDHˠH+N YR8ЄR3]ԈUR"`5`$4.2D)s&%Ƌ/t|Fx|җ=>6=:>6yV ]9KAZT1Ȕ1*30&&Q0`s50oCO*=)C7OZhAPT5k]*}_8i~ >y"YϨ% ?$\^i{"šwEP Ҋt9o.qpL-6,R4%Ōa$bHQK7Ib`TR%DV%'&(mvFN (g&'C(h8^7wjXGWti!vɟW1E.f7ոqbKq~W_19&hbD972(i{>2!yGGxPtyƃc 4 ٽ5E!'@EBF,s1%Ƶ2G9x @騦UZWz7#$h|}Y=[;hi:$oέ+;6<i kjڽ9y]59?ytZPYy:n5+Qeߨy|)_}@K/޺#/sZhl>;bc|2;̌ t롯{$ZR^#gaف97Ǽܝv|4CBH࡞.@f2*AsubIP{0L2 bCr+x4Wmi[|eYaΤz& gG3Pt83pfTcHO™<)Q?E:RH5xbj'AK09j/_aHwşT^e% PRj"ѦFLqMD$4BhS2G /Z^m`Nz%8>&2]öhm;SL=ByReopi<*Q/2N K5j16iSU2r&2Ӻ+3/IAoS$DP1KSB4$jl.)09辰(Ebh18o3 RHF%%YíN*d}blXͻTjZo:gۈMۜ q>ݚfmI͍4&qm4޴S5޼ŤiKtm -"W8f Fgzd?:y)ۧ Iot4DfW .+0ky0W7>z<"LcyN_p|8X}dXM59{٣~0ʦ74SWd \!#EagJBFoDsWPȢ$A8a5B."Up<(w_'dk}h}+ZCvuJmw<#D+ @Wh 虏Dq\W!K-?; 駉:r.pEQB(d5smb2>qM\ .O".DFtT >HgA-GpHk T:RD 9jljK#4Ki/ɖVIqs}e/)7I \A9T((-bs!Ce=edK]F*Brh4l=$Np*$X1V!TrKPHY/*rJqKEd5be$77t@~/Ït[\[Χ{͹ZK<'ǚ(2REO8YA\8 wq&H 8 A1TBlGI L# -]Xb#gvAX"qǶRvR`RRJ O%J+Rj*͉ V Ei`V"NCSU(뛺2(PքH<h8 օ+ 9agC`X?vE"N"vq3@<ʊ(pKE>9ƒ l>QoIdƔ=h>Q2I*ʃqMėHQ{HiEI9%ӈSP.gK,%E^X.N.vrqeATStD`h^)AN'ז1ʓc]Ÿc[y͏ wm/*qU\LяZ+q67Cŧ*|8/MNkjA@mqTBD.q>V!')ڶ4ŭԬz䬯/FOW[b0_<_8)gH@$SB nÁDVHu mgX4y=ԟWZleNyiIjҔn& yY\qm$ aR sLxʈa"mr1 :% gϏZdVy&c8}^#nY#qLK-S, 5K+5 4ޕG*ՉzF2Ψ!BaBl!PzvQ9ϲW#a}T?sSϾC p>ЃK]*.UfVr7o?N LjO AT3&CWd}QW%uvLFI/Sx]jyU9^(O\pzN3$O|?6PY,^U"ʹHÃU6s&)rWc|F6]8k3*Nӧb?Wݯ흷͍.'  P9zc.8pBn?%=IʖΗuÖw#n.F>QZ|8_z|g7Yrj02K.uٻ29=u7l?#j,6>i8hT"O6+߻8Ev7^}?7)ǯP G`}>f` @L¯#@?;]ͻT߬kv?|\5 O dYd7`"^6@*̾#۟71~iKD)l0J%v6ͼ_3ˑAM|oN焻/V⣇8A湈bZ$qr I)nRP"@ੈZjGӁ6̤K,EÎ'+ĕQE%<L;FI &6R ,OWRVzpQYM":;u|12NF!zb:KH{.~◌N]aظq9߽[6>8 ?.CzȖ!K,|3jN=jFJUSǟu>CL*\'qR "m$(H9wu fg ;^FU߫:+k:G/ZT9To_|\x[Y Wg{?z!w?{Fr_!%CFÀqywc#bA OgT8mUσ(9e2Y]SU]U]M|lzemn~"Q$Hv}OÃqكzVZє{GMxW'8ULQ,r1"MZ)6*Rilh`3MV%`{#)Ococ2ݝ %墝]q4NOW{^uBSMO 5&%%J3x&Z!u#,]c GBXF$A$Bp` a`XZ/Vp- PaqP!1! O)BƇRydȃBZ3E4^ȁ9)9 Q sɆN]Aܒ`4U~.Aw:*3Iލ[jyTu>:2 Fw Pq#p S@Q愧" ΰH x|q)zI$9)qӃ(4R4QJ=bGJQSɓE$uݸIkYZ#8NXr<{}C)|+*0,ѵ KGٮyzj}CRv!u/(e7id&})O L 2ž,q1"xߧCK,n.{ol6IjJ}s¼< _렠Q[y[u3>\h]` X6350zUV~'ml2vM@3ĚFYfyԕ޹ΏSOyRuPRڶ{kk^ku5Rmxľeuum g AKX̪/UT3HM>ꊦ~+`I]y'ح [.nG'u˻NI}@fkEygC eU`Gr]Ix뎠.g5 u#FS\MkwlL QhL-5&aڴ=iK6۱Žm۞ jf&e`0cb:˚LTYH&\oY5+:!c0xD4;qtSOv[s :a8IFpF2 ieGFb>x68#ġ^$N W߁9>.Z|i铳6XM *U0 i̼KxŽ(rҐbME% tcqTDƞDWSr1^xꘓ5C)%H2(+.q mOv͐ %b X ~(|$ vȇ]m|{wܦ~l;i;sKA';4ݚlt)CQ(E*(.9b1XV^#Ew>u+|%6!`B,2jvVQROy, w{?{)+ +h.Gp}ؼDUh34t~x- +8| x9$59-2jRLEgV9XKBv$ \ri3ɲl~S HitL*"3 3âҌ;$2U 0J m"rNJR͡Զy1?5U.0bҁ#pޘ`@rRt^D0ʇ(OjW%Ђ,D Mo"F *RyN?ѥH5zm5*)MmZ',X=a Z>< Ήs8>F0LQ==῕zm5jq|fU1  udת]Jr2];G%We(S|G^_(TM6#7<rߟao_É'؁Epa ?zw˛/Iavi>ue܍ƥK`uRE/O*mHW#pF|QRw#׀]4dEHEtplW"D"\ʑn ~`s$;ͷxN_s#IJII6< dI#RRc~=w s6ҥc8yN])'mtQN%k%'\*-SX-AK,:Gm&ZmCv>~Q_Z{A◞&~IbwZ6;M$C7[?9d7ł9 (2 6nOGhK]wN'a32i zG:tǀʚ IUyå O=cF@;)OLj9+jh}Lnr ZMCI7J0Jy0/_LRANF߆Md|wzteڡ$7y:)~N_bƋ[Pcv^jWwi3j|'?ZUT=%j_LO vh$y[0ba3RJ:wU =]k&rPI.Cȹqǘ"i5(f euLG eZ30{-#hFs+%";0:=ܵl>j b~]wu\Qw<Ȳ}kY{Q|.(SUCy_ULٕ1ts =\T0xڹ; ά!h®4IڝI'u1xei좞(Q:9yν]m{@0W yevzf[ൣ5t߃Bwބ:9s`p婷I/I}vj߻U'~/,Ums洧mNOJ͍MF3=S 1HEVHTHoVRg`&;=Q{dGTHWSY a;$Q.#R#. 9 $=Ndmwc*%Eο.Q 8{ƈw@FakKyx:D"Uzd LU,+&) X3(qCvY`I(EaaR"23j6tv3j sPK|t wdU$G |^sJɌJT*ǂ*1/d` zHLrzPQT% 9w`J\J>,["~9-Ų\:&_g6*W.r rq[{{ 5D yrRbo5P:R8e4^j"QǾ0s}/ 2w?ko*q[-0kyĘԁ[,A_oe_M |Pk68T"6qh3va`֛ܖ? 
`,TlhsRT}UČj/_+A.оvtTaj>-kݨN "]wq2j]ϮKT/TͷQ51'+9X=J.b>v?~ /0Y0MapT):Di&JC97ގm qy" Q\g[t|H[fLyŒ(R`K ᧅ2=1)0sd'˟Q鐠`EiTV3OByGGR:ZJ΂NJ%ceV1G EέRȰ, Ƹ#aREbD 1rBg.Z>xQe^޲H }t#gj{_a{\$o ݧ`W[Fr$ٳ+^FeYOdMUŧŇأ̘ @O9A ޺pE&3w} ~t8& }n (f8 0Mfi2er{#Ub1EB~D\j\h|FLIE+.$2gF>\rP||nYoG|87 q˫UʼׄzWW"cT4֢n%ѡeI0Lij2ΪdS:@ggOg199%UP{N5qq”=j&iԏ$)VqY)NŒ{} .,Дo (`rF1F;qsZu|:Zz@)rgK\3d!&rf4ֈHb N4fBS"/ pl$0+b4sb!svz8C/]x^]~$M[u}kC^Asn΄\2^(bFB3GU65 1& e㘅F%&JCB;!xzg$Y^I+1#!1%"(MDeBPa썯!;I\ݭ hdoJߗWh &zGf5L 8$mK$BV kbt2R *|>C{SK%ĐxfQ[is2Lp4*d(NLP$q'2BzQ7yhRI*{eR`9<%0%ztJ F^ڻ9sCtdG8xk=3\};UvƗII^ɨef@@9!P3!V#-f\%F>mGaj'}39 Qx%AZd h6!YR8ȁl2V3 W4#"M&3en giL6)i}$lg9Z YCJS6Gzx )\ &J%1󨙵JA>s*[Qe"J &Pފ~,:$]N :Aar1-  (AEertۃhiZՠVXNj΋^>I??g]g فI\''hõr1)n=hha8iy{ӗJĉBJOjم# Jz N{ X.&xI2H.eDHyjЋ_5aY:TA'sYm JEɶD#}ze2LeT>Q<`l{(}juo1 z(kp㿩I6i6@ԟl b0P?lFW LHy5N&XgL5^D#y> Đ%T\IHZ"*.iT,Jr?Ђ$m J{pQsB<0 %̱,?wV_f_Fο[yr{9n|M{~O+ 0O?VtWi1MLꦯz?>!ux#9K%Y"d+R֪%ehv>Kz}&'!90Lೝzʋ71ÄC7a<6޿j߮w[1L1ϙFoLaJ,$*{>!yE.A)#G(ܿ=cv\!+,mtj.%PIs$Y0QPuh~U`o:jxZAw BꈚeZA-wE66Wvq4y8?\·R-?=f =9ō 1ԳƐnjaqc3Pv鞳'|79f%C쨺XnnڂݶXj}#+u"BeLnZVg "wYSWiAg+~.J]5?Ƨl{5N)>>"`/]9q=\Wn3E%OR vV?Gy>06- R\vf W}9DF>]f6.!* #V%OU2Xs {N0F0/VXǍٽStQ];1ϲVqisHZY]Rzg"c'G~@Xecy{~xzZ|Z,ӇoڪMs;//,Sy/fA 8 :<;5.s=Fn5{8mn"kl;:^кHg&g],.Jt>h]vdݩ-!j$ oWNcYɔ)NΊ u SfrHWY]SJ*oPc%.[Jh0ZQ%,`V N+e+>jq"iNTƅU=f*\oc$(/ ~3 S\{a)qXl U ͚:TYD ,7$4Ho2OAZbN&Ɂ9$K 8-=aKip8xCyi.^%>*lZЖ4LD`[Ө @Wi:ZR2~$SޏF9YӯOUd=@k?:ݰaL76-O:U.t[qRmVvC=u3JHu&H%%sHEvyߣ5'^!oYXsS"nczQoG=H_}s (k@Yj7{+>̿2е;HLHK`:ҊSBV僔nNfY0M5h6.o9XavT/Ѻ˶GV3D%K}0zgrVD:*~1iLNJs@ȌɬV]Rr]H"d)}9fd*2p3k {_9\{i8/͒nxZ?E -oK3t@U[Y^p3Cwm_94hٗ|H uE ccTHrq;{IQ>L.%> "K:3Q4hbpjFpЉ;EgXBiE0[lI,3"djG#u oh ;E `+ R0,F&y' -~1!'}!]uE繡j߇; P>C3«Sq<2hej4cjU޼yx̬FJ, ԂB+;"(`HE4X{?/df/Ru }Q :2|G?//hYWrh5y#~Xk}tu@:,> -U)Hä!57:Eb xC >~Z9;8ųejBО<.6qQfYF8k0/Lu,F7 $:9)xU9"l2.H2=p)!(|nza8'QaEy]Xo>|TG޼7?_~(^?~;78s>e$ݙFށG]ϛw-XY\îZ9Ƹ0﫟ozcs.β6'!M.F|4{TfڲHmWo\iK'D\)lmXX_ۦ/CB4{bs=4P Axh/VV?礼7Cʍ$bZ$q,KJq넗71x*@EtFs{fڥnC8xzNBR2^iH0:)FJJJo.*RwSP-/ԗW.G;Ylz4s$_z%1 rc,STr oqf̓;a?ή5Ou;KaQH "hai)<15S'&HEeZ~XGd̐ړ0:%JJMd3BDN H_61&.guI% j 7#D >HςBS(t"895hz4!Zl-JJD ۰GV pQ#RYCAjH%A`Y{O@R뢚QjP#9 [9sq,vUH" 6*^3*ta1RrRVNttӜuFƧO>=I|j?{_5!4չ"לJʪU^:,6OE #UD3EDY5OӐ=1β'!(fJhmcBK.Ez톃`b.jmi>>:A<KW9ViNEm(j+-[4-UPwZP@ != uM#L{1>ĺD@I:*k bׇQ~z ߠw(FF4ր(+f,(9ƒ lN訷$2c\Qh4m(PL򠀥g\@%xޣ%PRY⽥Y?]D\jzq<\:$_g1.V/zuzӋ8pSO!$iIt@4h _kɊԝ^܇^<}X;Շ3l{мV0(0UAp]KE?JBsy49s2%gNVCϙT]1gI|q7<_^Mt tHr'IY٠) zXUwoGgU/ N7^?h^&vߍc[Վ|%]M!xj@\(Ҫod4.XjgINztTvI&um:#+4@I;SU)/(% z啲AqeT2΅H©I$ <#:j>1*,F騘R=BBAn[:șW\^qYy}qtN=b)f`zzT/.G ЃOܯ7` FT@njR- "I5# Jk.Z洐|iaƒ@tTlpd\/0ǃ2"ALjMU &eF2 ,Z#msnaX /뎜hxs{HQo_3 wӫQS"l=ǰ>S3t?Վ\,yNw'|6dtR#1GQ`:JOBۑSFTG[QT"FFC4R9XV[\.it*PHnQ'#}^Y_[iYy;ZMA,+:e,KA܅,h^r!Wpz6^O |Uیu^C+"?#|Oh8i_EpghMYE3OɅ1w~ԀG n&# X>2T >皫}>C-kJ_ q>)6VEmk'U -1(<1(<1(<1dRT\*ecQL>W@0xHZFqh:teX7ZS 1PƹCR9㑠`*Dfі-9MGSdC+N-g rfq>kp{8 . 
;#XZד^ZI7/n\׃"Ih$>UA\BϞpQp wq#מ$x'~i\f!?*%hP+ I} ۇ`؂MT|}Lm0 Q5Av೔;{WW;샩M/)'Ew?+@"P%Z3, +vXN3q5jw^!]w]9Ӎ"wYNiT%6gIVY3֭~{`A~%(H9w5X#kcC1Ʀ?{m2Kl-M_!ujtJC܆vuCl2N$r s2cwkN_ߎ%6nZ)t~Me5ZBY%-==ӡGi Jc:@!sPtD)(}N.Ṁ^%$/ޞӯ<}0Y};Qx[r䳈.#ZeBh-o9V@S4sQw^ i^+0] 4,tHDf[DkW)`EXp˵AC釃H5A#nOPg^7c6ѦՍ͐J4⦈ֆÁ C); 99v0}pQlCY.DNiPX D, Ukv>[D R)wM^YЋt)g@H3CerI%g4؂0ʑ#Vr:vCBo?quӵW2w;CH@sSp\ʔ'vNiAt  l IvT[!sJ;m΂HHTLJlRhyHc>MΎQ]RʱNXO m(Dym &2~R٤]!JM:m#/D5Y)LƃIlim$Z.e붵~~/AgAӋi^X=jZ)Ч]095>{ӠIՉ.'a>S\OSs%#]vBIt&Πjl oY*tP/l8kEɴ|޿ e/KieFOX5V cgMΗ@fhIų#[[^|9; T!-8  +e)V(Uٶ4Vi ɶSq,w_YlgGI1gL* P42Z͆:FstlsBP8еJ% 4ɦђm[ek=(Cp"=jd;MΎv:@K)&%- s4U\Fc'`͙O+/T(M`LxZ;pz/֦L`N W \C~n>rc"3- ֫7;B܊RY^gղi#0e2dn&iFZ dm#elvn""27|:mf|;ӼUK6.D\k;~I ;IdB7xq5qu{)1yu葭Cc'؞/Di4yoI ِ0o6hDej:*KR,WZʲZ%J?**QGe𨬓ƫpYX.V+Vkb>J+5Ne}wZ\W<,^}}ЍЫDӻ/N&T]hڨLәletf<˗G~+P$x] I6ZS܍o,~]\o?nQ笿!k Exvl$ةz,*YQcU:16ZI+ V[JbzY:H%U#6BԴf=eՖDUxՋi^U+,W,}-bƖ+Vi +)]EbW+\ZUXV9o}E;v>&M"iq5LT͆ԅmpg\=aE`0/d`$\ZJ f\!ׄ+iRP X.B-bxU=jqZT+H_ XԥU*qBkns.$$:?_-A -{L-Ǜa"}ԍ~&P]E<:U⫟i/n﷽ʑؑ IVFh6muc"9ci|p9>787oظllww~T挆~fQOo~9K?]/NG>t*+G 98:7pruszJ HCn8QHm~ve_o|?uŏ+L<%g[S7wq~GoRg[#<}3;.>\pK-;;Gbv޷Lm,K[glcf;Y]%s = vWN~r^?UcgWm]5M܊wIڅTLj:@M`O2 h7 !kB?W2ow#L`SY|osRbv)-'5:w~^|ۆ.@W[2<1n6,oB}y}sޚ'5n=Gxzq "Fl//κ/,]^{ $!)\\N?3.ޞ^\пmM5_):yߑlThm ޷]>g%h5Szvlukt4wC|d1Y6 5jlsKl:Yn9s'?2:݊& N2Df_)k -` l d k -Z-J-ϡ -g]Eb,U% U 9jqe3Ұ\\Ywjx\J=G\9/pE93UxWVUy1^ɻbN]jZ-b1$ $؂W䊉OS;a*Ff̌v+ ֮\\kut\IмskTTW,שZpEj=+V)Ō= !Z `_wErԥ?-1*K;5Ep+I-WN"L~Vr\Y#VubNθC\yiA"t5b -D;S0q5Lz18H.NDt~>Bv쌫vP2W,ZpEjq*qIOl\Uwj;+q*]#h+ 0Ղ+5LqB`DE"ϾqD\\RjՋJ XlHXHq,ƁCc݃vzbJ kҫ C=(BĠqRHƦsj l-4󈭧@˰Nr ACQShz"$W%j-=*͡ -ݗ W,C52f[ t\JW{+5XW$XzŰ\TQ:X53WN[j \ܩR+;q*K+7Ep X;)j6cW͋׃+c׻C0~#] :v5LWԚ{ Si ;͸zl r +|-bJ4bJZi\M / W,T]ZT:P3W:alE"rj+V;UnA*W{+t V+lm5bւ+RxU 3jqmՓ 5>T%9+@n"L`ǫdShUJ3+(\ک SE>"I lt5bVՂ+Ve"E>yγRH|{0\ZmJ}ĕڂW,}5bFׂ+VX3^ ];":Hfb\ ;5yWCTz(퉨+?]/$*Q:X%W{+h ?D *Y X.WRV`Ջ  ˕\Z,~1*q҂T+Y zbW֩q*S\I_n~pV}PQIi?Rȭ(%2Rg\W%Z GE[7׫Y솆Ϻ z#}w=|7~>X|כ#1ʑF.E}.u_ϕ 'c;{BWoG_~us^{} cNjUW׋rpto!przE)MBq?.3]/pK(:8GHC;._?%9+ikkг=w@eWiM?gjw;]p-1mc/{WW\0Kb! H`yH2/Y~ʖ#q9lɲrϪt,u>$Ny.;׾ji7纞f;%9kյ뾓˶e="h-'mdeUl:=?q;޼Sɤ[ũe=tgmGFQ\Yz(OW ZU'}Yգ&cs˪wk&\BWsYBlzj}l>jt%*Y~M ݊a[6Rq ң^ cѳ.lo%gYȖhyeܯé;:@^<'j^fӴK:A\O>.|}'Gco^|5o[㾞ޏ{̈́&j^c\uk⧑ڼz6߾0Uo_q'^ >ܣ>5 gwoeag5x]̔ 'O/34.Gz{%g ~|_._g+#l%N>rB%9Ϧt*z>f2;<c|~dcćDhpEo~798>}/TI4|KQp`J¶yW|Fؑ$k5IQ3\OJH(v6-@Bʕf+VB.ƅ\';KMb2짅7\4Is(m'd) 뭢١C"Z"%{j%RaH\'fhGɉsC~-6rJ yץo!bi ڊe"\C: p\N.wcFJ-FKAh^01ZOfh!' jhΑӨ%gC ~>hB0~}ϮL- 9k Fm 5bxnl4Xj@cV 6F  mR ߚ  <Dn@#LrO$7YM61\VatTOd %Sc).Uh'!Tr>=7!МUe8o[ B9ӑJj6S4{HT5TcPcwɇ5 nNϽ)eʝ]=I1c##[$33dTТ!._*X )%$ښ.@_:DHp,}VR K -b>'i^\+GgZȑ*Y3&ͧ*ܜYB Qd|n` :`=KE Ȏў;udVh o'0LwdMȗBф ix l^j`](/xF=J: o0O#4羬8X Auٕ**6A2h S±e60|-K`17ul0a6tqLixu Ύ,e0W. HJYl(̛o`5h[5( d{r9Vq#Z}A)]'FCWH%]4ȜM̆eBGkl}ĕ.G]+-"wC(Mj&6n|#ƒیd8B@FޭdWb!_9fH57]Ґ?ā;Ķo%Q.2ePBdBEK 4tUk*uk)6 AbΚi itt01uKs5tPk L\t͇*,mLygAGZ(SѝED]`#)#-EUPn=Ly'D)J6􅲎o eS]T%̈j{ 1kyjd0+=d$$}`Bj(.fdY>! Dbtnix:{zjE}l∽}Ae!:3 0srm/ĥi&C͊1K=: !pBK;vۙxพ7T=˚鼣5T ުUz < fL{T^Z$ݦtd9rr%@ۈAhͪ2AS4<#y䄰 V/S( {򠷒!HDNkV2],.-u!\(^b )3G[x+$@f6 ;USYߚFbNLw6dա.!H b!җa~x/濚wyv%R)\+`%V -XFX;K $Ynqi"! 5y]%@_!80.#(>#_ IMfjhɾ, +Ϸs|vm\?+,>؍`MXΞf מF`.}˿[B{gS bj\a &DvԌ<R 2j31@99ѐʌ4M\AI'? 9XZN!F ":N'%\]0ݹ"SAC L:[SAPJyxHsXAշ۬7al쫀'V$ɵN96ȡNBa~GA"o Q0bpto(eQE]NPGb$U7Đ<Ru`\3cMica j hN-1\1sqJm~)v R'Flߙ@YT;*6R{Ok.9rz&ݮAц%lǣ6Ƞ>O?yцr zQ#Iڜ+_~Wso>o%Z_@|:ͨ泳o>?u^|'WW_܎dm>_^^_]=1C"ч|v6nh[[ k@gc‰GtW{xENz[p}LkqmN Lzӭ4. 
3p15fbuD29G|Z:s =u@JPR:(u@JPR:(u@JPR:(u@JPR:(u@JPR:(u@JPR:Eɬ XzPڸg!MwΗRᜅ*C@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': Nu8`OqMq5N ;2u-kR': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uHf&'ǰ'LR\H;Ҳ: 9ڠN uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R'8Z1`>᧷slf{}fCOݵvy{&/(d\r3f=%%к%sƥ/~VѮ:psD9)] ]`Êjn0pBWݻvjN*O,+ 8<[f5whޫ],RT Ni=kW)$cZjfj=~CWȡ&Zat`OϭϼԾBm튶+RzSNqEtEjWCWn4k+߻t~((] ]ٮI] f5t5FZK匿'tgX]Mɭ &ZNW%;+0iEtd֣&\ kD)Z "] g>+>M%Qav;؞7cqQ]N\I٩$j`m)DR?RT\Z(G?.63K@#F|y4ۜT#B Oh.-(_?o΀C`7ɴ()}J[ *fޡln`dqa>3TP|O/l/8kQB y\"\ɋ41_hbab/fzWd/*Jǧ`?a2?ޛLG.7Q' .>|dUWCS]~+ a&Ow"Eoi g6ΜI"k-k^KdW^},D. b\LӓU<ϼorKZ,YO|A-mNrie)YR˖<\=ieVWɤ%w-id /̊hr,2Enc_k:@ j*8 ɋB\[8o(VfB ]-Jh!W&Oeٴ! 2W0̮͘z3̭}Z r D4Wp +dspo&-&sIU沖=o\}KY|aaތ7[3&UncsoK>|?.Q?ŠS*HawfY|pE1J>Z櫊Uu/?n*ǻ_ǵ$웟DTPĂ3*`Bϕr1}?kOyv3oȿ7ueyzJe![[ ~!e;|{:}+e\Yk2_p3mStg?Wᆱ%I1?Gbܱ85RS<<zHsTrkB ad4{G:왩Be~^N~9&t? lF߯Q6Vx يR]/z3!P?+RmIv>ǮugѹgN&gp6,^"L5;i5@>#[E!]qtn; F93l"",Ԭ(% TWhڡtAM鏦+U1ZT rqFY.%j?^,9-AkLSzAgU)EN藎o\DŽΙ@VlyT#  ƥ\#iǃ4D *듎-e_׭krZ>a;*;)o{x7%⧽_ͷC[m+'t~"@vs{Wںs=s @)wvD5 YBG>$$D# _sZ`EȽ,}~@yO.= kpX-%3¤)0e`Ir`9)cJZ+֐mU S7/Y#'v=9KtZ# ,pD"8 aG8z]ppوhȧDLAS勨4|1G\*@rql&(Bq, qTMTĉ3[s!0 ZPJI3oϐ^+nQDy+'݅j6АfPH` ^9@^|;7PܾzV[4v_IܾZӸEGvBG6y<a݇cps.jխb1X(@MJt5 U!$1AT&$p42!#Ysҷ;gg}/GptIx$HZ.(@>ͳAjl٩8sBFI'Jd26rHu m4;Ӳ"Y_fs_g;ݍL,_߼!^&jeDupZƩFSĪ|lr[J2F)ܪ \p\QfZuڡWJB`PI/~5*f;YZ;e=jcj|R dM{}JJAs)2>uZE`hd}9m%G8 wߚsgGטH8ubgkrp:vd\Z'h"PtGJ4xg5>wȏhC,/sۻ;}olqp% tB{C-jL6MbY@> B=` T mS6zpfV:@WT\ ^R^^u)E%Up+Hؐl<\ 8QGi ׀(zXutߧ]v@Z$@;GZa*/H?}4gE@qu +B>5``fTϓZtS+$u^VDϓLsɃQ"xIEDP@9ttPTp(Irⴐݞʂ@?@Xd@.4SQ΅gSG'@T3v?E ]X>*xB|^ Us_⟩Vj8==t' z4bTMzy6-tVLMV6"$Dj$)39̞L ] oF@ K^(!E,pu[$LٴH]S ;"X\ DK˭4q R83vZT][Wϝǵ.;* X_N}X| @CG[3W«Fe|hպ QQ8 ɡZ,Z#zZ3h=XNN_,tҵ`4F5Odr@86ZYu ')jDsOp9ٕR;y?gӉ{^ q>0p?^K{Z~yK{:HD)zTqC8asR] Tx40AtQ7XgY_u0 ߎaIN Jͭ0Ad h ٘™$ Ć C Hvswjrp8KzB$/[ߛ_drǩxLu;#xkw_xe'< fD˪RK"MURh ȚeI`x1<2 koMf?J4Mڑ<.IޫčIFYn2rv Xl`kcK^I{~n,;nI)[@dG&z-ϓR4^#Imj8UYRL[?2 I%"{fɊ>ޓv p963n$?n0`dfVԊΰMXoz2vҲ4]鹽Y2P th)-_f"JM3AHaJ+ DɭxCTLx **"K俿B2^p0=`O萔BEǍMɕVC!A#h 'IJCN ?"B/sOI^]>Q;QXIo^\6 څ;/vBo\x2ow֊^t+"8~{u[>(2ɲPP[/ A` 6opFczgB0eNx@DD.yt$!ȌV@:Y>5Lt 6F rBpV;E!!W\P&$(@aô9kϞKgre#cn!h)UJ⚫3ݻ]y KrG{Jg:0v(B" NH Dk[SQ ՑɅ kU1żZ=v:" r: %ຘ<O8pWcoKfqoҞ>bLUC+w58mT+Z^Tp:d%aG賒Sd/d(zvQWYt|Q 6N(OܒnpT?fiMGydC nSoě6BiѼwf5RyUV¹aȯ ?E6B=q#Ooͨ&# ԧ3ZWmZ6!+ ﯦg/N ohCsa|[nnv!8R GßMd; g$<}p$t7 Fa#.F> )67DOn\ KN-FF=dߨ},u[#M 'Gcj,6#Zx?SతSq5;QUqw` нju\{W8@uU\Wݨ!Grcu؜A_=ų ;O;x_{nYpS_"E:RH5xbjL`s*m=`vy$ 3K.X\Zø#mT F Jѓ %&2` ܙ!N Hv. WEXݪ?:Rq{8Pu}eztTUwInd{3{ue8`⠽8#{m;q: ]ˍz/: d}?ıG,!3_\/sneYzmY"kkgB)Wg,j]\.N%jFƩa\ 4&mjJFD0l٨T3وg҇d6| (m%Mѓ(QB,MI4$jl.)0(Ebh18o3 RHF%%YíN*d݋b䬹+ڂL}~p:iv{[#;A#f5.V{Zyҽ;pDR `^oggjvE3-~ףiӧ {t%,="֢gK T>eo9W[maY7U+veMϷKu=/\hIPc3_CZ/]6޺`E{Th epᩗY{麩: bgHדc臙tȞv_7#a۾ hцg&|W*Ge$ j "Zǃr{'.tZ̠we߈.;:%N;96WL3@]%-0g>!$%QWಗv{D.>>!0ש|0"i(]JL j uV|jd|⚸@9J(]2,WE4eb\pJ'Sh_x#gBMuzfiߏ;Bp1>0[x*)-;ara ~qpQgMRYCAjH%EPY{O@R+J#9 {EOkB2]`H% TiXՒRg;>Lsh'M5y͛ZVx&e_ "H, Jlywq&H 8 A1TBߎhF&[.FjFðb.Jmz 9! 
XD( _[9J( Jl@Ä+(zhR@/PքH<h8ƌC )}A:*k b䬖SߕȂqW$b1H b[c gT2'Ι 5^\f3^ޒȌqE%ѹeT,e9(_"G=zH;.J >>+,Z"^E|\ڥXg1.T.rA..J=q` 8%ѠR48N4X-c'+R6bܱ<,FDX]9.K;d?P\me?v"N?VoQiM-HM^ N4?=n'趀(O =.R)4ŽԬz쬯/ǣ7ũRw0yr @$SB nÁDVHu mͣ;&oqd׀+9㥞I3F{@h_eP^uKq09RA0LJa O1\s#SM.t.DG[JÒdy9hɾkx2@d\\to̺Ǟq CqVf5Ӛ`u& ?j<4Oi&=dӓ%O$O ȓv2H,@2jARW^))PFUl1!(\Xύ$!JʘT&H@xFzcȩ>FfJDI}R})L.mAB-@C-jG7cN2igruC}׳XkSnR-4#D93ܐ{bJYP)J+JyG.ȕuSEWy P%gy4ZdLRqƸT mDEG`thmc:yyQ<"rĸYzIo$$y ]}0VY33=kP`92\;zyGfŨQI,U + Wcc|J8s03'afz3:RsǜT<'h7kO92 &+n#ݮmĚf)5+`y"S< ART]{r`9)0/횃=O"2MuRt1YA(].Tpcpz poN`rPpC]H+.lu$yWE8*)Y]YsU0njDP/> ,dU׋:|m"K;˲Nx'}  mppiIe0:,4(|5t/i4SR&8l~^rJ/FQ P߮v[m(,xȞֱp۳}G-"vRu8v k q.m @qa+C@w∐U %U$JJ<`pE&X[iTYx+wOhOI u, Q1ȔM"UgsF0`s50Jw\*Ņʑ .Ef:*C+ 9N{ޙ8CW#?N{XiVmf/Gy#~J5Z'i>=|%J"bb6.fp,RT%™1IVmˆǙ=qL' |e׃URtK8[22JNX`PnIE3$qZqfgA]ί&K<*]~|:e eIcl/ U!N~ @MsZ ǫW S/ƸS^/]VI*M@fOf-uu@&6hA듗{)ɿA:*S !+`9㙕)D1$8P(x'z6MJ',$K^GfiEuxDtÎ9&hbrbPiﵶ>ȸ C< Ŵ =Q/mnj1 f`'7c,_fLVrI AGSYpjdzӍO_bzZ]OyޛaF'c2Nh!f#(62Dibbh28銦`Xޱ=pg9޼+aNN 3"y$\N* KQwf'y^1X7E? I!;K]8bM]ЅjV߹OXHI2}P<*l_IPL̰`EwOsmh'RS@rh4Hq6b>wqΒ6zwvsݹm_v ?#%]xzbSYe8.۲7Coi?*Y5|q8\lf29j5`Cܴ@e7n>I}~8/~1M_}nT)&.{{C?j{,9vߣ0Oov/3}hfpӽ[>ru5UJ-ɤWoj.ud6Y_-"-ݒI7>A6kj>Z-_~U[BKiRtٶ懲=`j+Ҟ|7I 0^4ni"E/FKՖԴڭޫ?;:6q?mz,>vP]ҡ}FuO֝ngk]LPNC/_x"BTZ(LTs)I_JUd2PAh4ӑ9U?z6~Ϧ_.x0.d0f`M,hKuLK! _"h% Jne:qzEC>fi~=yl3ؘeX$~pjR?i.N!. sDWY$QdFAS p'xۭ ;K+ӳ=6Rh =z%HUOyG\ek iֽXKm(v3xz,G/c2˟^iz)|ӏ?KZ+.. HsZ2$IVo )PƇO 0O 0jHBDQē/mЦ&ko1t?/yZt.R}CӠًQs?-wi /F,jyu0/wl|? bz %/]Z=8 mܒ_:o/OM,|+ah =l&\Czey()HSFkdR@gyFl:g ˈ`'e-cvSqƽ=&)W#o?X"f^m\?ĴymIL_)3UL'tl񣂫f.܊\qr2 I m=F`_i#r ,̌2)GsVJPhf rـ*;bRr)i2mJ>b&sBetVymocL=\ >Fmv+5ٻ8n$UgqGE09'nXl ,24Όhݯ,C(kd]_uU5 Lm |ܓHɗ{Ƭ/9fvK+|/q<ٺ{SoEJ]w#?11/X%׫ :Viu5?((ػ7ݵ|}qҦ7/Yx~O<B>;dXx=W͹swki6>lA ڡ"mOk4Z@pExf/jyUY@&)jˆ:E;@/k84Ⱦlr^BR P %P9 б@霓JYc(mP2@֐}>b<2JX"%%ѸXd6s&1Ϭ&ck|.>D$_?EHh*z%Ҳw#oJ2e,$΁PD.5fw^!5{JZNذ]kZ;FȮ?B"dIƢ1BNIzIֵnEFѩhɎT_G*>* 킀L:Hi VR5c3rkvJVKg]xVxz[']t}tlklg5F'H:&,k-/#c1z"&Vck 2TH ҪMm',EmIkf|B7 hfq_-=xRJ) D*&dZgQ;:-Ve%k` -hIm,,Q!!g(QtYPP.YvYGԆ\ 3X6#g>AQUǡhD9hA#xVԭbd5 l^ָ9m u(SYyvgjmٰ@bQjBPH0dRQg#iRBg5b3rkN2?zq\:Xg3.^ԍwvGe^t{)d"%DI3 ǟ\PJ %]^| x(wWBc}x͏w?U$!ј=]g?YOiύ_=1agSN.d:f#$'c5x|(&EmMqKz1bzxg.O׿C1d- L\tr(t{ UTNgH$\oo>ǶWT/sK[J~sʀ-Q KY( T$N{" СL .e |Ypzd; vw.v0Oajq{*# [t|(!I:[u:oCpP`ӽpUMચ<{Ѳkzf~<ޫ~CE"ɺnn%Y`ς]beuAhXvp*DM#ƤSQR4ԭAGB-eeⳎҧ5&#^_[نxjWu:ͫ_nt&O'J'zmoGFw>oy2jL`6tm}E'dw,'~Xjг9g˨YW7mmzY 界ԉ ϟΧ}9ˣ2jHE}0(h1e`(/N7_|߾㯾yw,;~_o߽-x9H`~||0|wj55ϷZ;=;w6A=Ci߿YE8W @}pmE!RׄF Slgg<[[>%Am y*ĩ Zl/pߖM}1io⋝h^1zܞH HNa" $)VD%5C4)PH>S&{'i(e?`OlX)b{riaUE cI*i\T+hAK>P*;&tfy~yq(5x5q x@9v'ܺv&m;aONX',{ƯѭĭN5HA֘B H쀽αL4daٿC%g{&I&36KD#ԥaH.JU0LhC>ZVӢİ-t뒈onٛna _ 57O'*>]M~ Y< 1NfC{@TF@BbZ*!-'/MYOf/>0\&1%fKʕ$bc5C1*IԌ nT瞭|};͘bQ۾W iR,^E7EyYNy]Ӊ9iQ&@S] Qu~U? hΠa>=i=C$.JJE%(hࣈfgG4&JL6!i2bLĂ@&)d9BouVwnkC|ƻsܱuOӿKl nunl\A\\\\/  |!`' |!`' |!`' dydi5_9㺘 KTǜΧT@0ѳA‹/#oeOD3 X.a÷_?ZoU/QōhH\Y:(ސGb,xl/U|Y/z6IkS|^?:7bs놥WoF_-oF`VgGJ;dPxћ-TSڨ ~[17/kRٯu\;~,=|u;G{ݿ[xx2,r \AˎPb1DrwU& !w3 M:Im$DQۯ!Dhp 1U+ Wm$g-E&@6V>EF&^(XșIָ9qo8]hM»mDl; @EJ"Iez3¦,At%%25  1P!!zӠAͧr5=L3'eX&D!G{!:7ǂJj"x2:#[AIƦyKoESW ֺ`^k'8[GN4 :ղFeOQd3燍FY6RCp3(Y0 s]BSҩ%ZGڝhXk!q@ 瑔%ISh "»*6@"'r-mDd"٨SBf*80xvJV39r.P6f| Y2{ہ,β# XkBoM27OPBo[" HY%;8KXp%7{!%C¬&봉e] s"[#~Ycv BK@`QKj2 B̩8i auJ&n`ouBʀq 2Hй8>LΥ$cV9 hTwD<^TIݎM ==LDMD> I]R;Z[6/e >v^-5.0Kzt 0VhGgvC7Aٱz;CzPGm!.4wS -iCK8+(v%Yo(>Ai/^̅&P^ ([y !@AJD<8QZjHiKg(c%g}蒓C;tgA'əGH~q @R:ʾ;WHbPuȜo2oCg~K~u@- &R2Nu4'V7&opKIT(ں5`00pׇ`We^xsԵM_.CU?tƭ1K]g]_)vEbn. 
1zś9 E٤ vQ x9\ H`B;W&YɁ(K5"uwteGMc;0T|΋Qoq8)8 WzOɴsB&{xX^F_B΃ΔjtmI+Ժ<wQ Ce],r1NT|>b㲇?*(Vsmb4?V\Am7/Gty-Tl ZgB=`ɍJ{HJ  Yú(MNre)YRmgG}8j -ªGY=zemt]p򅁮eg#BZZk+av:-8­OB^xmM% o߾:!o|ӏ ?>^'bXswN\$$xoǭÝÁqDzQq& gytSegXag;_8ޜ_pT?~CDgih3t ^!/_^y_mh}?U~[9&~\qx}6'/jqU]̮'im|OpwT$4{ݏ9: GJcq՜vyR4{ǀi"?Y!r4="L(N.bxiae $L~2>L0> 2)=sDxuPTp(IFBvDBP ,2bc IOgĩL(3⩣]O9@:3vO"A ]XC:U9a i]K'{+-U=vOXFKb]GRD,sYY\\t6ѦݴI ܄<'Zm,I g8@qE.vQa 5Ennd !V; Txs5a@k\ dhi9ABgvxj-ϽK7 cB3xKCաWP*CsW«J=ՖŬ QQ8JɡZ,Z#@9d,{G8Μy|Ϡ{1Ҩy$k#9$4JK?NQ# 0{jc8\9Bq¹~ n1)l>n3paʽgu1"KgRŝF" qIiwPIuE؋EȲ|xU{1 1ɩrAi{F1`9x1Tx' !FBG[ iWr%"[erUGl&Sh_Z6<^ ¡ԒGj{@h8[ O7g<|m()* t"C@|̭R$7@7h *xy!zK(,gNi&8~ek2,*¬IVh% u}dTxp"8(63X?$'Bt2z'#{A/uaij7ַܗWn E.=7kl7G>JmA+y yFxo S, *YF:^џ2 1l䵅,Vd7jV+m۴8yjP+],.])m$=d[zq{8 _k0͎r}_9_WΎjĨD뒆1ۅ@X\q(L\\</ye+:ܚYxTA6"v ߞL%`<B.1m7S{E"w%!/VR-S`;RPwjBL8gB˜PNa?o3Bs ha؎ui(v'ApqSgC#ϓwY Ϳ-G1hpn<+<)F{fA\>ifG\:M:fw:vM{fɏ0lx@ͨ{Qb8&l lciE"톉;̈́'NK.6 ]7@k^x# 1'd0bt LtFħ2Fisa2# Є 10팝-g/] -mgNDRUqZ6Λ#STU֓ϗ'=[eF `N;KAx/PP\H ![) CudAA1Ρ+ɗd?X4qƓJ gJFLa 'mF+JU.bQ"5@qxhx=U+ʥ(֝([1>rfÑ!P76Z zџ8k9ZHx>dh$j#1^_E#KKN,802BQp22>e.I':ו14 t _)xWni?"l2.H2?xUFTI*F ȸ~ ߦykY<'Y䆉ü~"9|!uX|kzܺz1]0$wJ:Ӝ?c+yfAGI#FHC w=zTQ]4g/ ~%eސ!"IŽљNMgh/Д͵}ŀ'okD鸞]%V;,Aq4xCa }v/mqe+#R ܕ@n>*jC ˥Vp9:_dqk5stqrz4>+Ԍ.yk [mQT_P]zv|n=d"c6~`H1U,ߡ-0rBZf.ۦa:Ͳ|D q4>:҆hx;ӻ9نS##gd۬mʠ9f8ב*Ϥ"czDqrJrW(Bstݷ?~s~pwo ,5NOc@?SO-XWS|ASn.(O׿}3 >)?{68vY0fr?ܠ6$WMJX%Q1@Gȯꮮ*TH;-2JV5G-CdXAAHѦ 6O Ӧ}A!6 lF/Z2<sߚG" g#)S2)ÆG,I'XD=AcܻK/6%/F?`#R,1N舣(K*JNTZ&yb1(/ [`:ߋmT)np09sH@(:2 lXj\omMzi%k"&[6uY0LLr.PHx3j5ItHbaOFXu~B *3r]ȪnǾ`yI/> l)# >&e6tr}݇S&їy3y?~li6jn9ChsC-9B.\@a*)u S23]]o'+Yle3`iцDl#opyOή?gm! Ss̼17Q)%kL+grv+ga"G&Hr78F΅T;HEF4NJ@ AHRVa S띱Vc&ye`QbhnD%7wzsh]6jʬ fvFD3Jjs߻[ryT$ [gCivˣpGOo3hKYyIhe.1~=eid=6g%ݝwYzY}[Sٓ7jڝOl:}ys,΃lCˇoy7Ẍ_Z9[Xk{ҕjM9~O>'7CR/!yum͟|sBxPyDe4b(gB-D@"3&e[꥕Y1_˾]vDUye8ȽCNE82"e!8bPg8m(Ki9WλO qy`Db\1hP5lpQHY¬ՁFXw9N`.0 Di#,VJ\58j(e30KSZӓ=VIwY6lol#.?tNfTR:4VT N9ťo cLj[۳Uwؖf=T뱋doHY^l=xr3O?GW؊zFr:E40Vt2|Q0d @OYHTȠg[C.J1&p'Bt:(FaxbZ#ʸ,x`b^A 0CJ1UQ:HcZl©_OFlM?ˆx`āLc 0h9 sd0Z㴮刘bgP ZVQT-Q2zHL݋m* !Dяo=ĕlb6rU=훚P*x>7駻8R ft; R*x",DD05eDDL ` Xy$R&윿cS享wK\"YK]p4ihQfv6ke-e` 0GM,+&t)goCgs5 <{2O s"ȼ_0cÓ# ?Mg>I |H`F!狋e+f6BW@泻3S,= :6LŃPrF (@d68 b=8o*gK>(LV }fąpݾr‚Svr-Vf<\[gVi%ۭu*?*1W+?kEKIl2/ ^|}V-eh/{#_L@ QG_{3F]`CW .}uJA:C"0>YWVNW d}+@K:]%D tutE9R V7tE9ZNWRt+ƙ`}ļ7tJh:]%\tut1l=!C_MKUΗ%b[Y eKO槗ϲtQ7 wp(/+,S^7dͰdh\V.=O摃P)Y_9NB{jR #v2/FnVV\L1X݌ARH]MS9yr¸5q;47ůiSUEl~7)9=ϲ o+㒀Rn<߾hT)AmHywY R $g\Kr1sB@WgHWJ(.D J{CW .S}VHtut%AR0&7tUB+:OW ֮b E.dHnOt\}( |Pֶ:AWlb]+z,%ѪGt)ѽZκNW ]!]){DW ]\t_*%JJ:C^֘\BW -]RΐSXW . ]%TuJ:C8 Tc+W_b4T'PwhRɪL5 Ms}鄖t~.9ҴX>&?t*Jhu`@fwt%5üO;%7UBKy*dj3+G}XS @wJ:CZ^ś/q-Jpeo8Z-@)ꫡ+P#JSXnpՉ턖;& %YI]=AS#J ]%+@K:]%]5+ Gt+Jp5 ]Zq*$ds+Im}:8qj^_O'E\A w]Lgo&EA-uZN՟?\p464Yrm˃w4o&n+P,˽nsQ=FT.2ʾ1wys@fe,k`(& wKkQJ' �j|7!⠵0GAzT?lSJvwyymMVtjK Y1^G98 J^I`Q@bY}TR,EQSGhwbY@2{5ML6y--^%תB,4XA?*OD~/_w%o.JG-?ldUԽՍ_c]tdWg k4{0=s4okt1/U74xhȦ-fFSͯ4X!/c?Eќ&ۃGpl3\J5hCq XI yx.caz3K.O߮v$Rcjm̗0L WE12|!דGɊZAHvڭ(vه0w/x=^u[,s* aRf2)gP& tZE5Z;JrXO>Dyj.dƗKA~[\*Yw|? .¤UI? z`/3scKoVRgE_'8ߍ5x>w%zS_VRo Ofh8ƊeZ8TY-NOć9n wk{…>|>Tjq5is{+FSdؐ3'4*zGƃIif,$<5~2)"0HCׁkU~WЋ#B(^T!*r8;&LTU૛=鳶 贜ݗT5xr9,j% .-M߃ 㛫Q+)~Ig ')EZenCO>Mf%o}\A@KXI'AJRBqN9&!K,9)NFL.1HLLk89cwJg3ؑJ-/l;sܦE3>]R󲾶\x}I&iP`{9U`!)-,3FM\lQW@T[Z{> &CR)5AC2%ɶce1e;؝c0-µ;Î]6?r%!<(lDLyT@jS;yc%Skq^k5Y ybLc}A2iȊ ,FM#(zC*Zagl懵Q?q?aa;ÏCq { W N89 %嵊s o)2/9sS,iƨJ ɀȅτD* ٗhc$K`'v3p6sďH[/Ne|Q/x eL%R9jfO<&fFcx 0Yz!^(/ ? 
;v1?XN#sE'{\=.BƗJJ>aĴpKO7oW Yvb0u- _- Im@e, }&I:s [&#ެ4yK[z5'\Z\/ŬÍx2[\ZWg(mz٭'gC=7=2 eN"I+0>Ld\FMT6u9WA컅%|kKfB{4aȄV.Ȅ}2Y5IN5`OQ[E &k eQj1{}ܭ&VN53N0mw!!B`*ې,H%B҆ H((M*cVS=!Je2{D.K.2XK-_ǢN(3$i<@k}gli3̹]b;. %Z& sbzzwMu,{/|0zwb[I̻L`gd]61dւ++[IKb͑(pTqp]2ݓUq4VeSF0FX2r])fk"! U() {(=EmVa="VĺUS$oX.RI?'H/R‘%1 IӐ\yLKo5J:qy,ܱR F+W6L} N|8_e0~P|Rg@~o-&'J ,'8+IN0g_wᇉv=Z?Ni5\tp{"vίN?dפ~a&S˪aSL\wFKe< ^dJ joIVy2b.p62M7nx&aaXKYD7D ?DEBOn_k&g,`GMrը*|IqFϣq/z:uaF??$TN?l^bOĎxW/~xs΅=_/߼zAOӦ* `ೡVi\;ڸǹJʷ?X.Brp+EJWrByvrAL{Ѣj#VtqeS Ċb2#u"t dfѾXk&d#͉6%wM{5 OHTԊ3P S %RDwa\z}%y4dVj >R Kf2i :]x}*"t:9әP`t;``{$d 0>pޤK$a; da;;g K"N,#.vY>QQ"1yeL'(#; s LZIG;^24y} RH{吢xޔdBZ9~}~7{pNS`}ukyaE /WVˤ;y;1>5'%w 0n.u9J/h IŘ-$Q{"UMSb.?Mp$$WRc|o4J1kiߣF#wK4Pq~12Te0'sVJ *XVcJkit]K|ڇl~run]wȵMF=Eh}ץvrB`пCE+j]Ͽ뻧N6t:o!z֯f!,ݮ[zmL=/\hM 9o[eCm:no6Y05gsԜMt۳Rn/y/> ?}=Pv͟6w m˞\.M?=y'o3-ED,;9"O@{Y>ȴ̂K0mp$I%ScF3FFHtCJq!Q:eƹ \g@E"Ykc!'6uiCυɑwsډW̫-5&K6Ρ3tLr!xԞR{ VE6 SہdqL0S%LLeMȕ )d[jOYU"X'L9XT\R+'T6d \1!| RAVC|4O Nwi,O-Ols|lTEfKz"|_O<;R%`*o)E3́ GGvH*\'D#mɁK B M gfKF KlHmyDɡ_g(rhķyм3k"IRI*mCzi[:h뵳rPI|$csIut!R: z=7V&Ud#\+%dp 6y`cڪ6{JW*]z7DH?{WH\Oz̏"Y\`_,r\do^,.,FZϧ(˶q[m`0u]**V]$@NVX3U1|O1'_sc61rbbsvE'q_ cT(M$"H=Gx`lvߔ؍(w^^ܺ8NQ*'CG^v# Ju`žvBKP2A>xG8zYpIJ*r:2*@-U!eL$-L[J? TŚMBudq ଠ92TTMyh&/>7{矻vbjwDl6w o~s4Ks{ƤBiJ+U&*-5A^٠'[hrDs힔w]1ߡYiI3F$jfAS䓀5@%-&c)9ݬȪF 6ؓt!&:VKATNgyI!otTg;JBgIa`T9 o85}=_%Ł`K4Y%Md++I*ދW瘆F)[*Mhc[J cD-;ZW<v$er"YT CW %Ύao!Ȥ)sh*ڠmH F-- MBLML&or*_\;}*jƢS>E.`+eCI" -URh Su*и[V[~q{:Ɖ!`JYV B'6`Q1\dgKlұ1y-6Vu](| 4_(hu sn{9I;6):ιYٽXä}ڵ)y:P [QH)/jtH.N-Iw-܈ќv-zę8T_qVB"Ur*`#m}OHpkk:`Gx!iesU"UDd'l@)!kv,W ,*d3) W)ZY'OX[S7GȌ]*d0q S$uBV4#r JY\E#i)0 \|XB5ʤ,O,SHMDTpZ,]`}5 -d}4> hSB9 *MoN.:qNM B6AFǙ6n|Ui8|ZƉ;:Y^=\L]rH*` k8eK'yKS-Gk+i3qD8WA3Teb$:`W2nj "feLd- ctщ1J>_u21@D`| {[Fv+]wʞ].>]UmϝVvxOۺO-qgNqD6$t1^}!M0(\V?^^ Z P0w1$jm}W} 8֧VZ5fкq]* ѥPJdV)5|OPq!a7'z5=jQb{Z\$=Lgxvquq7Us`T\+q m/(P|>XK֚k;`t,“X6x2 axGr1[X@cTNUϖ'"aHiQNf0~^ ;^me;䌇YzH$G^3b*.qž+tѲRb5ZNը cx<^3YfyTpE )FM6ւ"kVY! 
zy@ɼ.?am.>Ӭ:nifZ)a=4|ӜۣF9et}cJx7&ۗUp@ָ b\.Qh7!#ll}nc3n6=e=b_jͯ-51%XtwLW =f)EXHvA7l]N/yӫ pv~ym '>Hy~ҥUxZ}z_smLݔw=b[!K=\ʅ__p5ģΰ  1DDcyG=9*qK(䀋eq-y*7Z/o\v$ϊvSerdBe ]6;0<&5)b,F 8&hUu h'7k30ףnY.>f'f6 Go\{7'M/T!Ȩq 髗't4G|WwK>ּuy8?!98S)A  *臭޹r\r^[ FnZo}t2[Mv̳mIؒ껙Wg>ݖnzBm@YF/evYJ'HB*j @*ˣr/G3DjoM nZZ.# \ 5XA*lTIGDդ >([ޚL[b[ pR͖k@~kY{7D# 42oo b4;:AW9kޤ58MJs:FGʸ*+LK9r?*V}yؗ{jv):P}g*4Z Z q)YZ@%-&c)9yLV52VY 3H)Db3kUDtǙx =p|f0qgR0gwb ifx +Y@A0%&\AP<)]YY YE@Wbò~=dҔhLEQR\*E[&R&Rbcm"`S.1Cʄ!T9[ՌE|.\$R:$Ҙ*W*K JT)Lձ۪@roZMn)t̽ssUd~(eS1`Q1\dgKlұ1yɭPxmڻb[;[l}hm= ^f-jKÁu@a7kI QH)ߊ/Ue.PFOp/YE00qCTqmTɍ畳͵P &ܼßBÅ6QDc`)FSIi"R$S=Af J 9V_㰱?u+}҅+H:}ªuٻ6v$W@$H`09O;Dz#)߷غXr$K[v'iI5ŮWu!:ĮX@:#2&),X2\ ΎCU JufLMHL(NAX!,Cᇎ1 %?6]uȴ3XRBER&2!JIp!"CDyrĆ]mpd0;vQAN좶vEmgݣv {&<3)0Ǻw_ &n+x֊W*3;|9^SM`ȩqٙC6)yiHt|ބaQ0 o BUv%d*D,d'9S4Gdt׶FC"[}A )eUF' zcAVO^*vEaW>Eol'/7%-ވ"hӣ+ܽQ5/EJT)VDg.̭d}BO3KD|diJ- c%4Qgχ Ԋq ~~RE/О倲[IHY>joSaCH#ѫ fD޽FGtK/S$$,DA)6Yj=`D6 oWGDOԑg}Xw/Ń Dޖ-DN i,Go:$ ec ƚx/*c?cӬǗf d^ҴKa̹H|UBNqg|AY+r;6Z_ͻ XD 5ά05C6I3U(e暮Z-4S5K Bp"a);w&0{ԕmӻit{EM[oȾirW:pp`Ӛ655:z'tdz(;?m>O=tr_QP1R(˿ep(b|f4KE{eA0 !jG!t &6dd.YڢFs^"@%1x>v=3r#։*:M>5]qʪH겿V΋ʑ1L4G4EEp"zkɔVzN ɲt=o'»Do}]8ݡY^M4I3\r ;(rpP얗;7¨=iO"k>c5W.F `K haLA7^:ѸA6yqStE(Vt=jψ#kT̂Jȡu5A$jA*Tk( Ϙ $뀉,R9< $LĹdgٽj|l !Iv'#ɇۛKߗ 0t_jw&jzpoF*,#`W();հ z)uZ6>!ҁN5Z߬SyZ8]{]۸=ݟ[}E/ww5GB:lҍ-f~3.Umi"6ø&f,-cMQY5"5jl":;^k-Ӷ:K]6zCE/7ѧIkoa ^ʓpdFVgxq[BΔ%okK21Oyp8 =Q1\h명?6b46VwZ}~lgr;2ٓ^i]s:l}̭w1ʓ*/n#`[3<[<6KgM RIb' U7JNuњAh 4iOζ/)*D;Gd_Xu (k@YQl7'+>O3еil2CZwoI"wnI2r}ߖ/ $WKzfhI\4aGڏZ?fLfW]di2RS `PM $\ y2;L޵ߧߧ^d別N#JguU/$J:I%".έ֦$Z*(ZZ JSPD2HGD,QX$9D}*xEL&=Y6V '{YclpGNfH;G|u3O.fMìI"PMqXŲ69ggz2Q 9()Ӡ:A*JԖHƢE2tۍ\ H{ 2ː. jFANޅ5,x19(9B/r~x[B3xַ@mZW fWI÷*mo]VljJ`&AYE@y@+~V}/΢yR1>w߃A:i%sh{Z#}BDCadbn8F釲sFGN,'\&<_F8c~|=_Vlܚ]1hOQ'*-b䢝@LƸC0IT2 ,rQ &(҆dz !jJ\+) O9 iL44$lAliΞ~/w}gvwkux~e3=fyu+h]AYV#&Ej^2BYX& LZ&%yAXO7~:N٫zia5J&dŀ\GkElO MtIM O-I~KWO|gKu*[+ޘ `ϊ:butup}Nz3 IEOmhGf] "PԵgw .xg5eU€(LFZ,ȈQhUϴhD=9)vj2s!KH M %4TrXFevv74?}JAuNHh%h8}R~:x@C,/>M4*-ǵ].Z,L6c$(/'/NYd;$u/#eG'а,#Cb 4^+I'& BՋ2۾˔p;8`q[x.fd vw}};4fQgrI&Lt3}-WfuGt.wa`[Twu>s{Qw>v5VyJ"EKVLiWaO/be_}|\=jٟ2'ѲPgvX=u'St umЅBl3QCLmo[-HJN:N0!&F|VR"2] !ᣧIϲq :Y9XnM{/I`n#h_) ^'::?>EgWnDzFg10/cxY]D)hcY!5AI NRVɀ6*u@7x?XOz}by紐mZ %%e`DR'mGGVJ[= BfX4]^RylɓhϼN[Xv1$=;lOȚ3 $Iʈ\s(b}ā4UVȼLaψY!_kcB1O;@ T |!^[A:)'/.$çBT:?@)cJ654IR(P mɻJM"m+|0<c񳎳UY`ogyro(b#::Ⲟͪ'yG87Z.2N-G§qZD?@ѓ534+JB]<+k$ ?0z1c,%UX GgL6 Ky7V#+߷[\V9(Om4>4|ѩu[%Үh۟W[jQ.?jԒN>U|\ MWҦOl'|o0MCquMգZ!-W{?>ͯ ̾%F2~$fR޵E/mqs>/6Iwm. 0ؒ+q#YHDR|ѡ! 
9[nc9 ?^NboPn,^!--IZ[pKmͰfhmfQY> q0 Ga88 xf?Y9dYꬓmit8ocOQ}4\8a}I{f?U T*Q'~i*=߿8Euɻ7ox~wN('o;I9_MDv-o~\iJ5͚6~ikoZ8mIL>UabHר^}D7 㞟V: 付/Bɒ/?mMsӮ?5h[>H}P$4C] kZ 2xxknѢ&V$IWR:el&OE4R>Gz͆T>n8y^[)UZ\sc$`Bi#%J&'\TLș^Ӊ ϧKmz.އL[vt:<`Vua;OC)rZOcT)VBS!*1T <6;,WSɷ*&+6ҮIKk6JP舦V()5khΠqND$4B=BG&0zθ^>yn?< 5ʻˇP4z,+\BMo ؇߽Է24^ Zl,C uOn9Bmzf%72٬פGlKe"f~^z^&fQ QޤZ{L'2T7/JdS* U@clҦdLig=4KciLID~'Q$)XRi@3I]R`@y)eiT6F)D;MM.x뜡DB0*Q(MnpR9& ogm)h0]]-5 +]rqh7ܷQ ΑG&Yy^Uי/NUOW^L:]:+yVTC3KW]l͞P SXr)]g=zѼrl]6j.G5yr} K=]35t=oдϛ)ʎ…^/9QYgE=n.Qj<%q~StȞOs7I̓ƈbߢj>ΆqYrt.(N`.Efi1mtv._8?rטGߟ޿JZMnl'%_8:dp&9u#v;ns?;zt8G/+~PX!"hkgir{f_37b0Ey{Sۯ^SI50{Qf3 c7yk`/!haػ^nD ?s8?ߣkAmhÞl3>jb>+a,HTVTYIA9Ža{k0n$KFuJmwrmfSh 虏DqITUvK0@>>nWD>B4xm.%jGU:+}5lb2>qM\ bqW_I%ND/FR3ی"i  2O}D1.@)E/~>/ VԂ[E%=˔D.q7a@wś{(qQMtLzW#g}u1~-Ύb0~\n̓3W0"ALjMhtL&ʌdXGhOon-hr>_dzCg'[S]jI9UQyYG -U)Hä!57:Eb@Ktա?${A]Ó'̠\'ra$;z_L{G iJXFVLk^%rֱ$pv`:7fOܑMf=+E4l?򤝁 RKw JyF)+ (lq.@N%eL*$ <#:j>1eJ*(-VJV]'&$>qkvf^޲sd}zp6;bJfX@/D@I[Oq-;OD0TP0rN z4{LeN i-?0LxP] ˜ ΁ e-ĝ>8%c*ssW]wYMӦecz]Njk&CBU4@e<>  Qx KGNRmE igRi6r:᭶&H]Ҟ-U!HnqR%OJGg ~qfLmioz$j&>EWtxKHQ`8fq1K {O☥#q|$|W(0~0pťP*Ktp *'sv@pVC}Tu+BiUr1ZWE~]u>רyr}w=]~uut~yOr/hNo3>ϊ6x"Wee*ctWټܺOsUEڱoY$'P貋H*)\/q7NN΀g9-j+3㒨Թ]#O~Ӽ:fcП#NTh(]JL j uVzù61&.g ' E'j #F:*q`Q$wAo!̣>I\k҉"Y=QQMes;CbzaED6gRC\AԨT((-b\H fX{O@RMj(J8#Zx( h8AkB2]`H% A42g52Uaa14!)АtXS,u3̖Ej=)Ã?|㈭h'MuN5s*/pCSEѐ*"&4tiJRFl7s_PvlڴC΂}L<:A<KWRq-UAdQh`Vg R(X2^Q!քH< h8"Fօ$rlkk~9F)1w1D"vL޵6r#"?m>,dwA&v`,r[r,ٓa[$#lyxԭf5Y_  &[iW>Hb.bd5 l^ָ9m =})jD}XaoX|(5K&uv)'ʹPJH*&X#vFQ楎'̥]uv%Eݱ^T^]lԋNd!*g@2pӅ㟐\PJ %]^܆^}wlc}FTX/bJs+ ~|G VԖ Fa^L[Jz15Tja[ W*mI:k0%'yz6L,.N:f˾  M(46Ն㌙}vAmӴ ҴR4}9V`EipjkRwTN7 <&j) R-sM缴rh_J\_ӓnz]1y$P@^YxIR &DC d)uǫ5 g&< mqy7]}v9u`HRb4%(j 4/@Eށ&Eb0ܝ"̍hYkvfj3ZnTnIdmP S\,# : i7%G0!NeӍFac)(CR)A.:: \l)+[dT/ D]o4vF}6ݡP܍]dјx'󳴮Iڽ*öj5rᜍ7*; ȒX9p%tD eEVK=2ǦA'^s[/RRhB4$@nt‚NB<J,9 dJUJN23nʸ T_aCj[XTSDMA$9`#cY;録">rkߞ۟4= y!žB, T~+d>d{AV6*]&RNʴ6J0:jҒtxVi%0ۼ:KB~~fjW^`<sS81 y/[W.Jh6OX pP7z0_npF+1qzjD=?rχ4?V{PcGfNjgRa+J#;{j-u I>ǟ&SV'{7k;UT2$-ڴ`jr%mVʣ鬙}eOmtjc<8 ih_Z`zrMBkǓxo{ @N%k_,zv^¨~pwf Q]fC{ɗd _]G_#ppQDd0Ŋ!@Qh5R!|$M2HOPʾ:ذP.M+=ba[jJ$ƒTFyW([ T]P*N'{:<=?ݕxbÒ1qɘxDoMwvvbJwntMYvi;UbQX9LyQ0ojV?5D67A>)jC&K( $V()J\1 aE҃{mUlx~657)h zWz ![@]fh( KYG~Rf.E(x!ў=]`Um ׿͓>MSngkh {}mki ZmҵlX^-H\Cf\M 3.Y BDwú_~]PjJ[rm&09"OMm jD&1R7QYڏ~ϼIꢤ$[+tqii: Sd3# b ԶT;icڄ}$1 z_@DQz\**FZ]`7.Uk7"};_0g>|;N|*˳{{ph3AktV`~%<_8u $B lHl 6ciL4daSASZAF3ҷh!,)#?H$#u$$@p(P*PRqk+.:f<@tϰ?=^̉6cOY5C;q$jhvϕC͟I=]@xaBDY< }KAQʢA;k6:N%8yiJ6{)A!M .%JQ:P$>R3r8+xZ֥-X~.{ڜ)f ggL%(vڞL 5@񑡥Ѐֱ  (jj(Y02^FFA䤽Дtbh%ZG1]j iOR Pv";lB'%OQ",AJّؑBΘ|ŇNDd؇ kFWg#r\2 ۱tFΗ Hߒa7%h O U H˜F#JI 2F_x{aن?a]{AhvV MhqZF 9ecB3I"hz Y Mp:gmu V83] 5.wWx=S`βsl((f\Bdj|ſXeC 3^>>szRtz[YPEBZ/IYRł+@V]9$*hN^1Y s"[#~Ycv BK@`QKj2uw9cCwN!/2- KW^Ћe<_j(Q" @R)Q (5&I^'10KVed PKng<竑^ILIM3ֈ&ڹQNy3ÉX&#ЩQFě2 ȲBeD/MI@rF1kGw,g@9۽vD(w0*_ޕ:BFY&(J9⢶Z G(Pzo(beEQ{ɀʲ.Ek~~f0/g ‡EFoFCEF',:`rQ8zJ]bIϋv SE @o,.+AShGrM-T> |tW)O|%]C!f:JK엩zfv,O& Z)Tn,rhEdT.XBquajyjL3Q7O 12;rD"kDigFlũU!KwJ,ݽv"sK7who: krS by9+m<#7t07W 7?iN(7e|HG~Zj+<5litqS#jkAZ8P{FP;?jc֮XPȂ5ޥ5<Dia@LYf۾WHٻ6$ ]v|f#&OE2$eG1_q/s_rdKZ/p.$>H9 KDyfc) 8ZBXIj¡=Xx!Ie*`|:-yA)ıHtWEq6) ;Ver(ˍG$[S8m5j;*lwAMCAM*&?:%(>I,(W1ICcE*EܔsL(1`Em@#"qGkE%JB">n-Kޥ;'K*nk'g&V J3-5yNIT NFJ.̊D;9.PIQ,^}\]UcOkь坟t "g(#-48R`bJᤱObP`6XlJ;d-o<ڛo:͆f!}A~6]TxŒYR (b);TX+z y=HFxy*ֶ(a`8]?Q Z1õ$4BqQ Ą($$Ra!-]3vn*[ WtpZc 'ZMu{Ek^}`nw?qRGXԈN|B - U+L4Gt$8"AiF FSߪ<ߛA3 $=X:AC-u!SR"1U =RJbȼ(j azOOZLq>{{C-r/*Ў+kヷ=CY γ3UۏyI\pQ.|bLI)6#u]DAb(Sl4T0 KpJ̩^5RB(zeԪ}ƊQ8|6ь4׋ݓۻmt/Z'Y읮;d:s,!c/EXӎ*/uĵ:JiyUL䲁>fA@2۪ao{ 0]%R*uJܤ"*ՎGQ[wAy퓍3[\'ݜ{pq`tN*]OUH>f//I"Y=Փ;-[f50_K8!(cك_~O iλOk}Wo?U_;x}ioG zӟٯ,0>:7_u  
F'gݾ_%7_3z5hnӿ~m/_'iW#&f\/&gκw?}}8RZs>n)\haӣW1:K쀡u&G9mE~hyoF'OSS|۷o&~o3,(r@!.Rg;\ 4Wˢt}[%X.dlN1K Sxڡqg) ?4HRK0-+@ĝprij{DV6lcw)޵E&7"~rRk| z?W\{*8LS}'jČ{f|z$TQ7PTE-7[\ls#`8~ ٪=v5׊ Z0M7Π_OxlE) 37|V,ؘwńM A'Tぃyf,"PZd=CNR#GN#K/SFxd$m3%/ Ğ)Vjw.Z|iӹVT T`ҘymhC5m^#J/q'BOh]_cxc!ڋ֎Vv|o&4le< u)B H USOb}:7A\uSNǝXjrI'4w";Fw"X"t41<{.Fn⇜߷o~D/5ݨz&鈣8oG(+Na]Ye/39Z )tJ΋ ux  m0&ߖIht`burUNtZ^ԧ`*tCf:ʪΏ#I_^lX L.Fq|٫debd҈њA۫phAܘ]. p4 zyUwmJS5'}n\ swY^1\0s,6ح;kHE>ג)E 0[l%@kCl,m SR۠ ~ʹ<@ަ 7`k U cJ#68R$#T pɨwE%N`>c]n(C2mstu5ߟlaC$g_NC?@d Cd0Ʌw:!\Koh_wu6K_ H|r`^uˍ=1_z {S_rPscP+<%h Vz`RMOI5LrJ͙"<M0&[;25WkeV֭Vl]W.G73k~m-G$|b@bob\%*w=֞d?X;B+GJIu27G@-lPlTWL!)dH /*Qv^]%*5oTW/Tߜ`c4H,F,*V.ovOWso#'\0kGfG_h0)zMC1Ga~sfoo?nU SRbEgcv436Eb3 %XQ^Je,:u~IN= Ʃ~. FB3 Ri$Lxe% m^dnpRRE^*N@)np09sH[G‰uXT<ywfUv]D`ڊ]Dq]99j=ц&sMt<1BH;-']&49"jyEfB|52ۓ?(F>EBRMh!& 2w=$ -<Ђ }~#Anza )&(!wR++jR f--" oZm['{"`xY+1]r.1+?n2ߊϿ!}wON֣`r4MGa (W)G_ 6aW*ekvѩ!52$fПSv* e0ƟGu|TpKCoW V֩2K$uCvޝ$O7R3,òK|`pIJgR/ ;_fXdK1'gðsEY"jMIZ%hݤRgɞWuR)%rޘjZZT{0YQW@&l_ԕ) $ͱyJi&%v_]%€\ɾD-yD%o3Je'E諸Zʆf4Nl ?w"xUhV#Y}R#˵>g48*(@:^PsB|n$r.IԪ@%&ZݼY֝<ݲF?!j3rz\uuQ)n+Rv b\=RW@xoU"}QW@$hUrp2[u "%#uk*F]%ru;o]%*lTW + X 7*+徨+)oD%ezJi2hy˾q0f6?64"g4`CsKS޽}4{Ydp谫FM¼dFQ"En"qKQ sUQcj*rT8JEw|fń\k0dM$겜]mLW"ɵAF"CV#gLP '*p2$6Ҩ%dtF%٥KFSYQ8qU} Vw Ymt{x^@+9I9ULHkq wmmSr*#ҸjٜS٤6SNM2"e[odII4U!83F ݒ\/&t[ĕ_xVu>t[4~[%Yٺ]; f[ЏRcDϳ (&a#3 `P5CVt5tCQ[BQ?x:VnqX;`ՐA)ضpSR?J$!I53$F,V2ZKўd K@}P A"y(i#$OVh$!sYDV&nȬ'*j'so}|lhx!O΄r68!uYTߘ~^T ퟋ ݎW788߃Z[/(ʿ@j*X~ɉȂ¡$SCҞLe%)Um1Je*SB0e|IR%IRP4$s TX 1st{OFa($8Ak"XD<#Eg)6J .Wk11 #&FĶ&{HBn|pjuK;\1C-S_)!P`1ցZB!2G$6d#$6jǔC%.X)>w=o[KlC[%2#ԇ(ު2p~A0 A lC4l%54( B€CJa`ҠǨ5)딉L|ݳc9yL62m4\BC}h^hcMF"G8QpbFgyΙyf }.TG Edz'i:y[DF ԕ 85>4Mc,a08 LITI#pAcEdF9zcycc :0 08?@4deXBG6Pt{"4ިU`oHlYii8q cRLeׅQLǣ ~n2L;x8HN^Lܸ?%~U|h萩Bh~dlP{{R0*[ɔIVaB2Nn&0\hI?ɨŜUb!![rW;vv"$B"> )ʠUi%IԊ3 UsH6-r U̳RhIɦm&2ea1I٦dhvIvs.?^Smi{tT>nv14뾔4EZ]NS$wzhA f|hF!x,Oq#1N6'l bw}|ߦGHG%J:6f41GG׿My7ñm~n~F-"_j_׳-j3+_(Mض>X"sFaXUT}P1ŋC6koT3'bO.OӬNح)2kt<榴0=z8j"ªգeEYծ]7@Ws8<@d#'BZi#-*/ !5Fd%m X G&,=WtI C)!lK|ܩÖ*a;<9Eȥ,g˕tFa)DZLq6˜Ki]Jq!$OȲR \g@E"Ykc!+6L}W?zDS3P:Ĝ{\zŧr6oppehI>Fdγlk{gvgzi8Q \ęDX+Qr¢F8=P7e/rlVyx$KpRw"v$qB-seYM2~~ٻ mύmڄxV慲cCaɿ{P:B+'«4%xu~l'үsӛ#nv6Z2ll\0؀́Z&h 2D=h}*YA{4ze\5xإG˝AUēeM`N,c4IZh|? %m$@Z )K|hAX8-?f5XlTmqS-3zߔfLgxqOfR܈hLɪ\hQB(uJ. N,Xx8-yfp2&t @#ưޟл{/0]Z' odr*BqʪBpH}Bewr!~nסE?j u+PT ~i`y^ċ/m/Y?-/g ƥtC^<*)(ɟ)̓1x&zwww)]=]ߩ{pIoW7Nw) ?ɢAq`L \-fjZ+,4l5m.uUe(/$O\ѫ*kԣab8{qzi,ofy ;>dzqї>"L_ߦt>Ɲ1y`7ſ:o?];^ 뻣&鋍X~G;dvt(첤#t1#r`?z5uz7tWKϞ,aˡO7e^]JrO?>%gǂc8rCo{}ձo ߒ,盉I3Li>ƃ[??|][_B~׎bqh%R#_X?jMnۦhu xYfL|<+مKdëI3>j[o@{.RKD)(&lB",#lؖCIMsuV쨨 QQ$HWq4MW*9υX~I-C$m͡:ӷK;z!N,\;\ߒ+\Ь4}`V-fJ0);$ZRzagas^pwRlˁ,R JUg_ץeaƻ 5) uZ ֺS7H(BԈFOn5sg1_9;O,%+$E#VȢc&k2h4XyUv,#I>JrR ͱtll,8 3-M2]4~@FYn2r&`2|ȒW'o[emR+jm ~H*CIhȩ=z}:/;U5ز:j[1=imY.5=m4%ຘ>`OX|fY_vp&?bnAƤ]~p{g >Kl!+ ;"'JrN?itÛ $M9_^eQaG'HtByt|\%p6U+׻8e)cױ 1Gse:o+;X:VrjUܜJj!sr&W\t hȲkk'.p{> ԇNwٴV&_+o g/^ f_CQ%~pp~Q-7cb)FO׳rkd%В%t knFfV=.G p(]|<,zzr0{sadVlj yN/[fs{\ܰ@x󫯧")=Ÿo_tIR<{\>y7W_y2+q#0>k"$zj/ڼij[45oݣ_nACq˫Qq0fq&V䃇]JQEX<#_֕_8 ͹d﷋ۦY/C!}ېF:xI(W&jtO:ۯ[y:FbZ$qr I)n2&OE4R>^GƆg w6+T+%XKy.vL(mXS䄋ʲԟr)'xbK'6=dn;F{n;UAvpyBm'imgYWxBy|H /⨘}y{BrRwA7M}=7>ƟG 2M]p&{Z9!м[έĮ{7H#lK$5w,"!֬ rXw QuQzOJ2Q+2JI*)WI3 LK꾤kKFi 6EOHD S4%! 4"i(]JL j uV5lb2>qM\ I%Z]2,FE? 
?h>E :ZNѾ9j[xU*i[;Y~^GkiQy\ DlB:E 2PT {hR 9g!gu@^{-THƱ`c@ BR6!HۓUKg{z 7/<>]gɸ!m}qφo(p8<Lg9bkv 4s5k*//ڤ #UDx"]'aʞ pg6DbIvDD02P-#vkGl7+wlڴG^xH)ux >)(6 Hs/Ҝb%PD+5[4V0ኲiA5d )yT5!D0% 1g!օR-akJ_Dl?Gm "J*aR˜8gRxzAsADfkFC*.(<(`) (~*I#()+2"FzD|qth%[gk\-.qpSO!$iIt@4h ?kɊ=.qǶxrma-F1w\} gm%8 2D?.[p{MNKjA@-mDq?n@7gqm3[J/'j2rߴ:{yr&ۡ~Zҵ<;YʜRC$3 v68iJF &%0DŽ)&*^"mHrqڿ Ѝ5^9 # zoi?yOW̺q\⤄H\QW ȒiKĕ:VLSn6 ٴsQD( 2ˏMfdh-sZHC (|oaƒ@tTlpd\oWBܛea=T3뤞^M̉Ugy|q];]sٷk&CBY4@e<^  Qx 2B=ڊVW7T$52J1hښ"uI{ST)#EsLhB\e]r0WZ-WH"̆SoN'iCՓeӨF%ny\zʙW%u4pr,pp=\=Gb\2a`~Dp=tQ}Tj3+ uDp u4pɕX*St*SMWǤ "\er:[2+\ SŪ<CqםheV4:?+^8nTNuӹc+D ~Fx{C1['g()+(>;Zb'׳(ؽQp/KW͸Gbƿ81(6HOø2Oɞg)pɃ0y$_1mfp& hI*U&%57_\)sf}B8L>/L]+R3+-F*﨣 =xӨepg WFH}T!]Hh*ˏƋ/S+:WJ{}'pd}'DaiKBAMO%dj!"PJ%#]Tm W{{fz^ =䪣L1]+kۑFR$5` 3`<^3*5J}/}M[K]uHO`$) ]=C.-+ BWES+Ԭ!]v^R1(CW.@{=JfgHWwjAt:P W@O^] B t|ʒKAVvEȄW|ہzvDG \6j)4 i(=YgH.hAt5\5bJC^ҕ%[ ] 5?5;J#]acDW,?jejR > WC{ԑ#J{dLtg8.e3y O-_NN6%ơW(>^0IAQoѸt6g׏D?~o[fT>r<< ?B8췿88kٿSg'/?wp_vݟ2$ykVgH]=?gva|#N6o6>O@EDr2{?[& â/߳o?߹kA:CΑx뇟͝cKC%m\05Y:ˑQtn8]H %U~|kyw;6{О7[$gMLW:uyOp99"uW#гn*d>eKIYdVL`fuN%C3 ]HRU)m)K5wK0dA#-owBҾl >BXȃcFgdɺ3>h}jŻ$W-wvZxN5bU71HGI+*G@1vnE"N Т-)$qu$~Zc曄E)qh`DU5/ g\mFO!jT'b>#3k^,XU!kՕkʧMpS7B 37%>{(R.I 0kϒB4BF( ٥f ӘJ2P4@ZE36Xvqo:@Gֱ*=$]I QUF2bx蘊' ?CN`GųE 1ʃVs I"922.e(5.Ne1zOC }Y'H@ɘukx$ۑ oM`QY,TGת>/AXżmC5YKae0D!v*"}Ʒۧ&̻ڢ* g`*a2¦f@ 2q_HfhJ7a#d*iSLEi8 8؆kCW$kЭ?xx@A4Tm\ *w/8 b!C1+5P҂7" %;)jI-, mO]A#S0 ՠ?l:훫yٝoAK:9;ifnuޤG&Nwǻ//2.~{\Gl;\na{iREW 7w?ڎ+N/s;;?FhɪxtGGDChm9V=9>:zr֚IH>Ua %9Fq:ZvO N| 1qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8Nu&+rqyڧ@8(A@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N3pIN H?̥9SwE(Xq=C'd 48 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@ 4$9ڨۭ O 4r|N ,N $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@zFNntzՋקch~{}@Kku{~U6Ow0 א^q _K2?yP ƥ?q8RY(ڽCZ~gZ]%EWt"=]] ?Źj5h-=u( ]=C*,Xkp] ] )Ci3+3DW11t5_ ] eBWϐ9kԂjYpY ]S@IBWϑ,WL00{uoܮ?zG^qi{Ƭ^k^){d͑2Roauٝ7O08WgHG@x0GÛhɾjm:yi?>lMS?^~DkD#Lsco]1y)uLsRE3ǑI־o޵4#ұ. 2ѷ݈9lLvgئMRio_,QϒX6 2ߗ23yWA=>UtNI"w>e&. i<lR ~DwHBW|=M|M\}4dIkﮚnܕyЛOlysL'=|0ʌCɇz{pKXܕHjRmeptW̲+{ĮrW"t,J:/,)]}@w%T `hU ]5iUҎ#+1Ů]5q9w%҆LzwդtWN?b 'PN|LDs2y#xx}hM墥wq*SfLhMB˸ȧߖ"3M oW39J]tJ_Uob1-gD:U::5Uॺ◦vX|_˹0N'_ kᾋ7OxK>-|5]?g9On*'g8SU_o7ǷR۩k;il]trh;շRڳ'S BtPrkNəLz#X6m\V+oi~vJe\^Qls.zu~.s/i2k%W{~}N҃/E3/;sϭO/Vޮ]ߧz;fu퇠&?,uHoK1?=.\bΛA+e't~TChl(r(,'Oi 'gOx=}b>Y[룛1(c밃ڹ+u}~] WTԇ|C}PP]$rh.,/촭Fˀ@2&rJrQ="y!.,j#!Dz@lO.gw_{4{m3Эetf=7;5wQ kP>YlEE9 0dTp\V6:jGՂsЮ&J1j>u,"rj*:Uʆjш6R+bܛ8[8;ŢkGk_ٳ˿m=}zU\>/젉[]l.uIm/)^121K.Pri5Э^(Ch9v:vwS[u[ ͷ^$ߛ-g÷^$~טԞXË( raTキ*e9,((hk9A[obJW)@6[GւԊ6F E8B$QEEp1-W1 {pM~E pgMI2v܀`rǓӈG{ۥ° dcƤFǐFbQkHDS}B$|@P P} WqHgyp娔.\)`e>Qm|Eb&&.# =Ho<@#ܵ̍2%Z +(S+hʵ6i2H(v!'b` ]pe40Bo!b:c A~!ղ=*SXoqmHd|`C6 rؓUF =NPnbx.wNg2;MSϦ\v'MWWa5?;\<;ݚ }Z\;P`.v=%T.;0CYwѱ>CohCo{" j7ZH[TT2WP%f,&Zd]b/L=!VE%B)(AS hAjj:vۛ8cCD:$ˤ{Hy zp z:I˩IYaF`AFcrP[$fW0kV$ rVXDLBtUԎS%ì\vThmEDDY;P7J!]ISqRDk[m޳%,eQzȓvchuXGoRp=[AЫyXs7&ݲInEGe $͆qa6x3G dq:rMڷ MJ 0WL~9Q%|˝s t+ P"`Ռِ^+)$wRHxx bik xNВ}r8C%_RJ!yb{eEh$BQ:jd_9$^['@\Dg&ڛ83;+pq,yI$Zm%z<%d2D>:9d@dN)f7:5('~]ϾzG:_;R^QKkePUeD2ʵAU)dڱ?dY Ϣ9+8 Xbͦu#t`4MJ@H gkjs{qp$>q,~V;>m\\/a9d8{͗R!&*Tݒ Yʢ>J&'Y{4^wކdf6KҰgWnGFg^r#9(oDۻ`FOiuo_z8jU:pr]AuSu4[|9ۜ33⠆vF 'l*}Yv m( *&V@)s;ty:|^$/L)7Aa6ݞα?!0$)o5hviw"X -!0$:VK!RNgΤYr}mKQ^^L'y?z\"_(@\*l[a*(g"(XV^X~3w!h)U=RX!(Ȁk%F^EAI *,E<|GC-߇P7 7KN\TtH)k1k%4[0|RaX)c1 KXb6 Olˁ ,R;0Mv` uecOjAc8@@VÁ|>?YH̩tRw>|fg-T; ]de:nkL7MPs]yA6I j 6 !l fqb!LbRӧZ<jouw:. 
qd3bog{-:;Ɠ,ɕIX"MZ1Nk ԍ {gun@Zŝ 4P-]AT[pve: zƙXF#qN ~~KhW=SЮy$ɳB`F1F 6Z|ZD̘|A;)i/EmD> [6I[_g%+dq)D8 ZSE͛fpy^IxCQz!\'mTS0 `[QS)M@eDe6=p#eԄrrؼSNb1Ж]l9 7`WbĤH4|XR'G6~d|Փך =t~iIȆxKH}OK+40xi}vNO+m& le8;}ZSt뮗s:/GnNd`Oː!#|dg s>:0{Vzbl/5^$kHxѶh¿Ղ)A+}O +H⍣X˂pPӇ$쏏g?<<-"JY>/:ϿҐgq52<_G_]ldქMy5\*gnI(o#bd"hfVYPȘժ.=JL 19"HI0HT9y,y _W.$ 63Zw6xB XA2GJ&cR匁iv R\\LX%c"iiAZc9jìH*?g0HlM9yAwjZ-ZӪ:EmѼeJb[@W;[-7 L+\qz vVR(f>, $4Z丿~?A+*V(< V Ԁxꂱuh7/k5z(&+wܴKZo9ݪGL(8E?0!h.| VX!眧47PhۊQ4/Ģcڤ>Hõ&=rly(Sz|} kSA4kL@q5 Lc *b2E?ʯ:)[]qR4(Т=J 7.5 ƣDZ{J1oOҢ51&H깯=R7`+ rb1Н{6]$ymߦjk,҇e@vie@0ުC1Q&)"4Xs\#$mc$3\ us!dr5cCʞBG t pla%0oX<OTvz>ey.׍Pvr|z*6̍Qsއ~|i/bԸۡ]'z]lUAZLG|1@4BZ/1@JO 17EC+!Rejg*a xomd|ʹ'OGs F@#+/LRI+׊|ΘC{+y|7vc{oqԟ5:q#;[uD5:qc.St .A~GMιK!ZR31eP1(-2i;tQxK?2rcF%4ɲ^ nH 4vAX_ |Fq@SmFzLSs ׳8̤e7,mcwe`[00UΟ|Ŋl"VNpˡ]~'s~p\6oز-1aN#'Ȕ<N#e+KRcF[<Eg f솯G[}eV|EVL P*yrZշ}Ҭ)S|og4i%[kvj{><<9OW`zw L?=R޺Rpnf T`tON(*ݏ!e!1Dt_{nr0=ss-g33l #9q֪9ZL?Z-dFj(&?ͦ_5~((LUYn rl`-h`o8p~^GXҗ7f" \@[(;(݈FCu4i 7B E c9˃u'Qf ~H()8=fhe!eds=cI`NגQ3r=i"I)WG/5 rAFɃDu gM6!ZGNJ`@/BPI-FD i~f"4."6A:h-ךkuy2+SIІ옞 R`T2rZ sZdKǢh$a4^idナ4HbSi" u=VՔ2] A|O&pvu~/@qvJ*L7bUm{GjX);=ZHP[U`Ag4_ \$M&뗶[0ny%I(0̤"aOTɬ4Ͽk"y2MgRKݍ =l" ~s *ZheO ݯ:m?@_4;ÏO[`O5l7iئп&2h)#xJR!y 7,}PRϜP)hڄ)@n&i˿6J<~E$o_JçhoATciɒV)]N1FX+FV9Q5zyC^ǫzM~ wY]9|:Nf]j+_TS%{W`e`%+XIrP4L^8ccr P:9YXכE8'-)>p 7oeiZU;-!.wr/{4c;Q}XTYNn񁾟an\}yվڈU_nܼm얨xOzmӕ'š>ט (~rKf/dcp᪻wۧmKMާkrD)3[aB5:hUЃMb*2F{ڭo^651d| jvB1hS%.=l1ŸHeVZ!҈BD [Z S'RXPVcA”9{I Ol t/0ܩ9TD|*lGMC:j/CI2&oOcv o?,YQ_UVgدnkpSPzZ@j[>?P3Ww}qT`!J*1VYJG ?_=UԇW9Z1Z5<""&C$lț(hZԒY2̆d,;q+!FuJ{}b9\*e=ܥ^z!׃-L~tb|ӓC3*L`E!90alތD\qmJ'| crYȹ6!~y+tBmWn6"bm(碭RDCrKx*zr/K{˳"%62~ s\DM+v7j&J)HW%F샒ݻ;D3Ohuo_(#Uz7ڡMT+n#5"ՊR^)>g̀:[^q([xLRtA(9FV8WU{`E¬7WD?ATeVZѬ]1b1VכE&x>ƱuH,.fٻF\W,\b$8$$]u2E',ߗՒ.v"2DR_H~,Ū"55^A;5_;1iʄZkZ0~SѮ~{ƩsElkĕ"FT&I4†Ry%44!;҄^jcGk5?~{zA17An1?*]R<~zt5>?pq4L[\ٻ/vz|>qNjW'v/݂yƵKRNjn8|1tY-%"P) QRkHG(Yj`[7DRdԋvӢ>+q.X3F8D@} gvO /7gZ B6tMYݗ^Rynɼ$BP,Y3,>ZgɪÝY.q$6ʂh;^.Hܤe=M!즻r :@E#4$a0d !~_(mV^K}zyMdrbjZDZ2gcB%mh(Ў_- ́bcuOcB$ۉRe=j J8YyEr?_)atBF-2e{pH,d6R;c|i2Ƨ֍ ~:*JwB 9!:Hڷ9F)"N#`hJ'䯓чWnJ#G8gӔѿͪwfgqL$~~LEcJ(=&5E^sm̗u_8ֲ=10xij.}*guF;xǀPjW$S]h@SMmi!.7Rn]wvs|Eu=h[kf\wMs.ޔ]GSڹ0O6C^x[쥓ՃN 4Wi܍{i]m(}iAACdlLS"aՈ͋mv1!H fۚ9IhTe:~~B] DYiǧL.:-,4mDvԖaQ(Dn -15psHt֭Mfa $]Cp~CPZ75&dgZ;'㬭ٵ%\~~C҂ SfkIw\!0LQtDQT_of9˔*d).&12w4H3Xj3:CltNT1=V5h?[G VÕukw QSW2@QQZ%H1-# ՚A+4Fs"+( *-I+=5sޜaէ쳆m60kX=m_`Rl7ٺ aG0@{,h*_q{h!77;ㆸZDhoK,IU"ӋO@+XDY<*J%;I(FvZO@g-ж榍H_=bNI} 5( aJUbB!ko MVo/Ve9 xE޲|M#"%7uXC>4:̄$J|I::FR?SVp`luMS!j f%NmL3fv9op--I_ɽY-x6j!O >z_o>)d5 LK BL߃n#~:s54087tOdg|_ /iſ,Q8y(C2}k1DV$yE|H'dTUCc|s>cᐑX"zͿ.)q))q ~:*0J^J5u eYPxf PQeW,*]XpB:}N&q3/:}8UNSX ۬zhv;hOBo\&~n.wؽvߥFOSj4FOSjkTg-ev`uI $CCFB2:nZiwۗoBOVLzŘFZz͉rqARwdBE#/~<\XJM+4^R-1j4⼞5^h %ƦZ&z)BKm)Fmq2TkFЯCG#I@L5))g8Aڠ 6+@):#4-u@KJ8=p,v1b-pޣ 3BYjФP9c7i:~鿞 -I5SwU9yգp;F(ZӸYJ#JH0+K@(1>っRHL"3ΔR DF팂̅OD?+P8)08gi{M;TXFC8Mj%( ݯ}NwB\RAVؼ"ӊ yKV%Y~Sƀg{m}hj㨖': V]L+͍:^ a֥C^ߠY%K6!r!|U1rE!i4:=)U= "u:M l=3e1!UAg&[kDŬx`F[!-c'=S%#y4be%{ڄ!cj>ݞO3PU5XYt{5}5R8-TGP6b!8KQX^B.@GW`x68Ulx®ɡdF /`MyߞC.A+eݓ#a\ U`w0}:!q/w Sƾîs8f|6tDW 7ob/!e<)QߒTlj)ʹ"]1p-v"D鸌梓i=! 
U`S S!FKh_`weU,r~3/q_mo[9&k)%RQkKKc|)7R+Kϥ,,%K&SfΖ+jE.̥BTVyk5SDzN5-lB>1@72[K|Iɓ cRk]Z-gjgJu;uJY՚qFTėUfʙj6_3̴G s&4ksN?_^qޜȁQM$QB=5R>L?E; .,N/x/dͻƣ\~{ǣo?M._n`Gۿ[Hxi.F_=8x {tb)?]2U~8#}vI›8 1]d#6z|7 WKfGo+϶~ YF1'D:L@&DiH2({|j:\v߸YuseO3 1F ͏Wy$Պq&8U*U:NN+QV& w|6I(^-Q/3dc Μ FbQYP<?+OaTM}{Z%hr?z}gO:;i+aj׼<_?}ޞE(my+lc:eW4<Rhl!pv^\"g䁒bwkiZ[Yy3[d !dٻFW ɬ{~dNس 9%QױlHr9AZnnwlٝ$[լ,bq('N~TDWwE*hYz-9LgUjwl#@ĉ?c^Jz^ov4j^G.gVy;G׊ W71ǃb6a84Yd2 vb R&Qn2|8y >Ϳ/nA$hf;%m]dwݮ T h_U2+U8!be\_~8$mo d_ު["8iޚ[b$HݾԾqO iT鎽m|{ PrkϢ7vhq=]a >|;^lvr+ϭ̤>s?-MTvv24薨ބ-1ZZt{0:q,9C#+IP`U8:0>_IWyIUޮA&ms )S!cJ1*đҠ#Fpv5f L 93a^'#  ϗnZ݁ĥ#x@󭟎zi=wb볿qtŁjlՂ_Q˪<ތENL8#Y:AWL(VFZkB,%})9ivSrԘQHJqw9G}'noZ yK AwC JN靡 sEPkbH WY#*MKn>\ INȖA*hBRVE=m&ّ94~Jjq'<;OGTasFdѤ͈ G.27BDMJ09XYTi[1)1q_ c;m?]c¤#x5 ӧwr%&uϙ1愭>J9U*|pF ȳ=k@="σ/h"LY-#ʤ>'(}B.K󛉏3EL(Rk?*1Wُ~UُYgQH$ycY B0%#XJ%>i)>&ᨴ1I)ς+ǂws4)ڕ Ǣ 〣>q & b6(9X\K,OfXwz{:& ת\`U[ sxf)rAO=}2v {UM2Jm) I7=I: E: lr| $qf gS#Kz$TeS$ e7z"mהɪ ʾ"WQo/n*Z 5`H/^kzo,_FH`64cƅQDs6}/`d:~ DxN*IgZ^kYߝ@g= Hs]9[ǽM:nZ%߷RІ'ChV00~`q܈7y^ G hlox \吜GHEeL@zrXKXZj?JE>aðPRsҷ 3-Lq%1pX >,T%iŅ>KRHggm%ך+'Zɲ`.wS:sèDMK?\]Y & ,du[ЄE)tpRj͋k#_ 8&*2ӰdP*Մ\{Uk03]]S`ng:i`-OBR?>ǹcEP A1& "{?~[XC[?bܨEl_!n2"P.J ^|]͇DJ}-ÅO l{'R3S1A8W*(~~{?r:û BoN+/Y\4|7{3XSښt?G'zn݆ϗ @! ͭ}ޜ}1np jx/er2@AjW#\ '@DoQd^#Ã#!o+ OF0+ N;ZMrxJ,> &SRtN,+O\hqgq5ܦeԪ})ڄ@PDdt;zRhڄMChwf֧佧,gl*RdL$긍 jE7~.VSfK}=}rsKem 5Ry!7sMC6{RDzvҼN61%6O A}X8EK$4rLH*W'W)jG 1Y|e#(?,o'ƺkvTo4^KmȐQ6kK*!P5!*+@tTPE"qEt!!Bʾ",&xHT s0,ˤeoqC:CY $:!OA*mb I' t7N+XSPM1aq0יi<C9gZUA+ygОmKD § "q5;\Of%ʈ g`"3_@9Uho h&o!97%nPi(5Gj)8/qk3v%´ʴ:ʫOΩAO NL|zcqM'NϨͫĪtMKӶw}Y;c;?1SL6/F)nCQpU6vq hgDZ>);t xA<@qP՗\bN I"ۮ#T/m߮Bsm?!9<,dé|MT\OԮ2wg6hL?_r0^_/~ "أ8WUD|/`oܝoK9cL*ΗbDܓZ1m1)Qye1l1ХBI-Z|`O|k\CwѯJӘ4|ҥ-Ǭ i. .J8`2͘KXRE1%k$pndɲ\YJi?qDJFEy3(! $eHZ#0bdiP &rsD^𨙹I%J4)"'+0wl9TbLhOԨ.. 9XBTYRN9>"8CjՉ[ygb"*bASh9cC42,q _ 3!OC%9GNrwp禶,&tƁPsNiˣvMmdT04 DZ?U$}=Tҳ(8b.7*ѐY|m.(N : MSp+L1(9H9zD'QLȕCRL.$#m=[E2z4GB6<8'I`CmKHJqrInӊ+FvKɜp"ʀ%JUtRS(-hp׬Cg1Cv*6sEȑ ԀWh#JD&:-UHHS|Cٚ_Tk1bt2ܞY21Մ)TٺʀD:dDk4Y0MԡPEBNRJ,Iz(!FO4@Թ JSKD;Gu`Πĸ%ĵ:/ R_ap@p@H e,(2WYi" ZF@E& H8R3Os)h+EP(H+SP&% HIb114qe4 tcXf Ie@!H+uT,#M`t $,FcwG(uu[ F|q$uWgl$!I$RpZfzJ;%j}UEhx=};ԔRν${aD^GhķAl5٢SoG#_jMt-3{P-q[lm\`P1駋s6U-צ|Pyb#1j7eElxZ5ov *; Ł*;KrH#kJ,)__kHuy?ۛeuK5gџݍ\|Y bGO8^+d#_֒-s奞eLq CʨgJzF;7)Ls:vnm s/d~0 i-~@AmvS[N,~=A+1>sj;1jh~ȉ j} n`h{+_ݩy;?bˎwLL,R=] ]w??%A>J ǯk7Q!Xm+hnWc%cSrZ"Lb(aj4Y16bG!t\'jjL!: &&%:GK.٢9"Yxǟ"dԍn a| J Z J9ʿ/x2[F"EiӈGHM& M#i!䶭!.l=(}:p;;ŵr {YG(0j8ڏ"kj#Oxay2BԢ/5 ml [jԷV+fs\_Z}t}v_ fw礚j~N9i'ī˹U]&Y9h)RGgrq6$W !L)mVÙ[ogXEzT6og?.jDgJU&mYsv97^ӛϳk(,~/OGߙjO?ilID)\Ck#%"Z} ၧ(-V(=SRlIt$︃'QjFOQ0ʻ mJ2U-3k kQWL93yQЉ\ɒ#|JxD"szTӌSh6a(+6h Fa-d xR;X)=P(I#x(yba\,YddmL 8]cwҺ^)cAB-#/X Z%/Xl}(:#N# $@eFڸb@2rUqJ69e/wM! F8/#ʢJvށMd@] iygLQU 1@ ٰ1##6d2.=C5Yx*&$ %I(Y,J=iZq*D$ot'pJO&عkwi3DǾثڨ#µpIϘOUKn@%0]!$pdT)&boMXCX٩ȋ5y:ad7M6fY3oB^pRY"Ѱge(Sl;1 GT "+r3 Q2y a6qڱ#+=Y87Y =b$،p&Y*Ǚ$G/oy!9Dd,8h,Q̒#myϰ"y4Vdgg^1>oL 8L51s5K+ G_졠;ĔS^?9Xb6БpV3qKkjs/:"oI-mtOp덃v8@!sh/N9Ȑm Rʎ_Qۦ/^V>غbvlwIe 'pzgR$z%3 EB*1xN2πkɲի5"A=*>ܧWqaӞk:`EeJ"q-Rr}EEsP6 S36J&{)QxO%%ZG-&犑i47Y/HjtQ=o? 8&{8`Ji}>8#g6{ѢcP. 
Y7=Ef&pnVCW;VhX3q8F9QѲBs T&26樂`OzG{l8߲}C6sqhQI!vRWrgC) ۹HS f%%CSmRDEvbdhhO3A[Kv i(bRȁWFGǞc]8qRTyFx]q 89K ^)Q Ig ,QMʢ ti|7X\ǫ/vOS]*|:M'WӋqH~臯wLo?m+?Ӯ9B wHJ$:U]4ns%'C7Vo9X%ގɾqJm tgqC-y7#*ivȤV4˱uL854dd~>.i\eՍmݢOI F d1Η,>͊ !QslVZ7 ,&ΡtUii&$"۷d-m&fGgd^̶:xVïg7J@d3u6_WzuӨi3I}1-٭3eL7mS0Wa74 z:Hf=.C0z8"VbdьN'ONT<:y&NhH䨅w!G(K3(r`U>bQ5_Y}gI'KSwS3?˾՗ϋjrEr# V!RL`A)-X4ݲW#&-Ӹj߸_;Qq!j[mf7_CZtivI权A煝~ݗaX_9M+|q>v!\Ψ`([{@ÓB[ 3;:%n8 ',M_s;>>\b9,#ǯ7jstKs..af?}tz5H|~w~g_}n}%_+O\j~IZ3ު phimpO\DQٮD112+zW}NwsݽxvRItzp9mՓ\dO䖳߲،__޵x9孛ne$˴h߾hao> #zcAA= A`P{#x~Hpc3 @ʩ_rmʽ#N)Vjuc [AK}u0ui1[ZО>2wv\T>y}<sRDrxk6˿/M+ztPئږMQ@DT) L;?5w)g8oNpK۩ZUON8Jh6N:F`zHɍ[3g(wB(4"(H񐂽R^}6v=R1pW|* z;8qK7ݞewR>qKiNnw-;[޹Wc[-%5;b`H;AEl1ͯ#dmrҒڀe.qK9q +Y^˙ǧzۗvշyojqLɢ='GӀz?NbAjN k^OG}u?ow4h3ŧC֠r'v~K&v+eh'DQfwfmH6eT9YYQhG!3>~5=]?xe ~eϡVqЭ&Rt f9y|J>Wa߬qnY?shW{r8tM)=*ŝw9j%X%VMLjB&E)+M{n $%_ږ_yiQ1 R1~75EUT_;$ $P)*Er KоҔ7]Ap ёnY;6Z&\A.ײRCG%d||BjO8q}J%R vޙ>U}68E񊹩ʬRiR %=m닮䧋g2\W^TjW'&U/8T\)5V?RkdZқ3vsb_܂9_bn.V/A829eK6]r* 6,y/ªL_b`馮9P̕ɌDlUZ޸`Ԭ%D1(39{Z$_nh7K, ;9CЉ9ّ.e)1b֩#梦t0ћX]ԚȺ ̹e匲ZZs ٙRɂ9_uI'&5*[m-5 kH *05dv Oۣ߬i0B Ruy Y̪AG!QDH/zt_HÚk]҄, $g%`T켇eR\੤]vPoY!".%j8Gi̹@NE}orqqwfk)-S3\)-BQkTA5dų>R=0ef+Phg `S'%Β}ϦzT\-̦x)N7c 2h r 91h{)ƔX O,!ВA_Jx؂3q*IGK :)ԧ5tMB Gau6lI>JjsF׋KIKK:pl}6#jucks ICX[UA}#:jd&VoAA5q,gJ(\i(ε|bei-icRWb  8DFPAr%B ac=rFqhZ7tV%"h%Cn&fx"HAZ7)RF+abjBdH!U-2Ti&j̲QoU`?gAҬ-!]z.4vJ6H.P8z٨w#vTq,1 ;,I&ضO5[NFJ2RI(3i;Z8ڔ b! 1q 3PImdBT/Y|lV*ER3M@R ASJ92q3Tμh=Q;pHMo#9Lq%yy| 7K=-<Q(90tT"Bq gri_|~ {k%rqJ-֫ȶR# ["TJ6[,FG523Ƕ;Slv8r:мSjhQBH΄b"-8al3YaA$`OY` $9*5A(@"h\wj m(|kcrOY$GV">W<(-~V9 sα/~VƨS99oON|E&U/’rvE3dے(-JyL_yIT/Etd]5բIG*5D/|(! GP~6 G5$LQsޭ:C IrTAT ܿb,}6 T4ԐN[% zC-Ehc5}KK .H=cMّₔJSi+IlΥǻɦ1̩6 G=OXJnF'"*$i\%+(ge5 T詨[6ݩ:>#W}C/XiBmdTF֪R)5|(z %0J;ens]sT́By܏w!Y+Ւ':3FnY$&S {lYBEYkED6hɜw,ɭjT1ړ!< . &!hOBXOBGC0j{ԓ>4^< A<g''!pP=Q,h^vuV׿Ps>,ڰ " k1jq6,yE6ƨR5c2 1jV-ڰImQbTQxW訷qS7Va*w .`ۣ`$w#< A1sy⨴OzBmXQ*,ڰf" k1jN/ڨLƴj0Ǩ~)abmTQ(A>Fm5tXQ;(Q0B-8}a#ʹp[)(j$POh/t;U(G|z󳽇_O.%՟OY?vR,欞E\\]tr&?׫KȪJygy|(E{Q `Iw}&W}\~ruupNn^%~\|˗NS:UD|#F~I]'OnXaȌgk}?Eо{9l3.b(αw?IA<ٻްom2~6ryol߿8F:WѦ`N/\KՊ+yJDъXTK;j[m߼_ q뻦s-]T;wbOBb TXԖEf?gԞ^Ȼ˻Wu};}hlu}_oμ u<a.8l\W+5:C~3F \qm_xOo<;J "ys- )y^s_ 1!|x'[Ӻ-'GF1(m=O<}psG˫x֌|0*G4ҝu# ?Q{#I&}qz+~x>|î 'fS `ؓؽ?ewwt2u*oNk+NfMoX8Yճή=~,8AqG;(?4dK WD@;폁y)Q'sMWZf3^P.[{Dwb~TfR"jN8p'c-:G[q0V^q'Xq0 G#Tjo8b5vK8p1bD5NE!Z(᧜vO  +]]7r+G&Y$o$&y ?ʒ29K-ٷg h<#/:Eu`MIf;}#"@Bk{\$ (@C#Y{B\j6mX+OȂdEg! /joj?>:6:VfBgŜu]27&ob~B6amCx`tчNT@rcLiSY 4Cĺs`LRS731H>蚲n1 +(1jշQe"pL@Q4<}gtYM<ktXI/֦oH."pSckx6r,w+g9B !mR0=O6y .ܻqP Euoɩ_>rQsLsU\j,@~(]T!W3]t"45m3Ύ:'Jhz/ @%ԴR@`,V&`\w Qs(Y=*jjyFVy<` yՊ8>A;9 ޤsy_t./5-)i_~5Y̦և~ߍ#8'gnmF2kOm+nܹ*H1yM+JAmxМץС9:4?}&zɼ1̟ݽGS?h^AUoys/ww3=p⦍;??~ͻ7ns[p; ϫ5*L/C\\=kfk^񩴃Sצ ))o K.9$DSHJVD)1)ѱo!]b=8Zvoj!#\oSZ=*ݭ-lEecM^F4׷%4Ÿ9]V㔱,w [p t<ƛ;l/ڕ~vF!i#S}+L59)<0yo@BFt4#'eW#L .>SCP4&k̴}e\JuL:w&&IY+hg[tFGAg=>jPKg1 g s9 xzX f.·9/8kV>gIa1 Ṕٜ/9aAu="⫛yၛ+st!>=d 0S9R?cd}}R&oSLywz81&V洌>Oi'X7-a?>ݧy=:4g=dnɿC'U9<+l?O==q&м%z|HfOτY$ 9Cloq"ʝl@ jٜn9jn˼܎{~wxr啼{ׯw:$=k귱{78XsϮ#rYvLVd\w,y@ Ӟ &VNF.8&.,.cOg{axTEGY=;ż,Kߥ5z~F`U&p8ƍ4R39@xbI(R%w!o]`APPuRZf]xM-GPn@>RQ%9Jv@׵bItA"0BkWAǞRL4G9J1Xk16!ܡo%\M-Ve{ԈފjsH(2cTNnZDVjpH=REM)(C )( 2jUeU@3o$B`de[P/agQ jqzF#`~FI }Rԣ+oޡR[Mkƒt[Ԉ,(5^%S٦.L*^,w)rg$B` 14F$+cUHt"Q&J륆k]|c\jodl\ e-XؒP5&! 
^"D\r>$/6HV@d,[/UJС^(z˥)ClR6YVFgFn: Am(e9һMR 1JB:3`kc R3Q==VJ-kESWf>М@D- Fx+TggR0!ڈr&DdfO΁W-XxwH`y״lp(x Bެ2.$r_cE%Y$=8\ͺ%cujZ--"[Z k RCQX%{F@W-`XQrhӀԠgREBhK}R'-J( 0\[_ 8KET[VHBe(a@XG|) (AjyACV ",zs^hkWyW.R>I6;ukyJZYt|UoxJ7ya~U|?Qx>퇟~ֻU7o^,u_P7羿涾}2C]~Cw￉޽/#Iacޤ+ۂ:NYLpȲ˶?S'~(/GɔsȮ\p䄇k :pt.Yx>)2))n?M}z+*:\pum(:ş 6"ğuaQ}?Gf{pyWo6XiW~nYcn ȍ^kȍI w vtX}dB .8ĆS5m/o \>>BnuR/y;bDCƀ"gKd9aQ 'E#|$l;%Uʾܘ:oRt9<oo-Kg3HNTo1#-feŁeۗąkp!' [:2}:KZ"90oG&T<b5!sDـk[ F|7=&*[|j)szz\pLoX9r>~ʪL4B>rq|ٻmlW4ݕwd&qn3LSv2 &ڒW-S&(Rیx<8o}yE$0jQ7&+͊vIti,Y)ڧ!ƐnWbi>"Oz>&NG h X-aZ$:TO&?חu7O_W=Tڣ0t3#rW}OrrIK$qiq{= SJڈ.qqO K mUR,P;t))Ph_]ڢLwkUnq ݔ7.AUT7^)t; x*ѹ>ܫv68jܫ!}q vjDhKt1F]i)Jt|:v$I`:l)z3D[)K9Ui;KaZs,CPd% =M MoR#{= ҄꣡?SIƛ|F-uT&I=N ݏ41jGoDbs-eqWdz5?B"vi<>Z7{Q^R3V:is^u 帼::0v>Mp˰:/gsHJ})7?.8Y;.~o$_"\% uHnQ.I>;otswoc߽>#\{GB!0HN/?bU lĮg?Kz8ǯ^GŅqɉǣg =LJyNgglZxvG:-BtOZ NO U.zj/\&El}&&zcC-Uii@/zi* $yfbr6)j,xǩbh kq({b~Hl<%إF`4z_EmCwp0PƒNrMZeQXYԺ01\DKn= gBu<;2Og3`C3-. `J [.+s?z1Gn;rHO_WFz3Hr;Z.2\Qx*[7aZѸw E"wm>ZQv`=Li6ULݱ\!FdYi3 C,VS:/ C^R {CI(xs")]TX&zS kuJeZ 10""F+B4xEËk4[+wPGF=Y^^Y@1,/r4dE+-;XږWVk]:{[GtC[pjgyTB-+{-/,,/R*BwkV\YIծ֑TǨkۃw@]_)#2f+N$4'-gU)%OUK%`ak&0zN>J\+riM 89)QK'ClX5r4HRx$}?DΪp/՘)5$ьtQwϗճJx K~V ICw[~z7!Wbm  iuЌZDEĠO/3ȀlkfHC-4jH0j1xi;M5$*!9O];GN,΋!KCf<* Ѓٻ1@5}ʋqRb-UUVMTϴ#`ea{W?~|>d1nZxOg/_.>7sW꟏9%4X) dƂ!pɽ^й. 0j3I}3o %)#P . Q@$5Jzs8O90os ((0CI""  $ˇ740Ί*vn2QW Pv^@t()c諳ƥVEK/^\R֙GlZ_e|1oFBo ꟛ؈#MW!3G\8$o]-~_HlUay3O^'3" Ø4ߕa󫏳eiǙ՞_ᓍX4BcAc_ckl)]:%RY/ݢQhLy 1}bLyFf!n~tn^ʽU\rJQ^ۨ0M'OIOȃL}:vk͉7o8JF*9B ΃GUБ [@t2*(<\kXHPȕةȝbR ̭+<~O՝ƽk9E{+vCOHPF2XS)o>0X^*NRkOG?}TꬵRώTS݁v2n%R⌓y鸁`23Лm˲R^ zc9: TcLfdDR@=ϝr:*ͥ́*aT5ΘSxf0p8Ĵ i#dFLH"`ĀC &> 0a, N#Z@!e11VS66 6:[ $\sƉ @sK2j14P21vN5N2xtc NcpW*\f=7qr)spDF3?pDx!axU&!G1 ƍUh 5Dfy>4܋T Cs*.Te8НcJ< .OIF T. Wps{%a,jk]W ATQI`Jƕi# +f(L rg ?YLZ1w sm;Y8Ur^m&#G1{U=p0#FOa`lbv/`1yGs53ZJcH[&a. x;KPvOIWBl/?Qc=w}1 ݶȔZ;7^Q m4; BM5ňLr"2}B?V;]5Y_ݓU vW.a19_sQq2~7oX =bj\zxF* "\l5EFPO_~&Gv}_[? nb|?=Y4գ.Йw1K,eF鎸F/?2CE'Z)*gjP 7]2ߏdiʫ*V $>xnB'K_ܴIv 1#Bt5oRO.x=GɍTNOt8EL'DQ)ߺ~ԢO!7 F#Y0õa7cAu7bA%Ib@:F@ײ!TFCTcç4JbMkBPzS B*Eyǫōu~=-Ǩ2cĽ] ,ωtZ \cvf5V 3^qaPՓ@xW!,8e Rѐ+Ў+inn!UGp"Bǣ/q2fW$j_Ovx#@I.Ƙ{f,3geJg7BK@4"$3])&Z/k:B:AbJn^$T+@^1:kdJU 'SqO5IkvTD+ )_ݵAXEIpn&{qzCzSѐkH+qwμ!8ty7'/#R.o`+A7R1uK/T,]*?TTWE93Z0oK3 ׂCwV'\^Ef AO8q0ќ.0k59^YBWy]K!h4'\MY#*V&$֖NW"U'I#v#)89:A>I˷[M*vbqJT94vgVe7q Ѣ'cA[vCVg=)CU $ e*R1+CbV]Kic04ykQdaz*_HM"k-d{G}Eb*cR9f":W!X^KI|˘J9eC\8'kjֺqg02p>~7`RrϚ_9:?t4b1!_e@ؙ}v;KyC83 AX7Fk+@fi*F(aݙ ,!W'wٿv/[VL@/e}pk鼜"3c5y|򧻻>=Hdn_ ZIQ ѵJER{z!i=]iUs qv]qH+Q(#Gޥ$ t|WV[zIn=ƈ]E=ˉaFkt o|q֧6vgwH)m}4kOJ:$:AbMu EgRS,*]:bIu %h?_M6c>lij™/5ŵ8g]5!I/h[ϛ@ׂ3}<%K,{qJSFnjDPEʜ0 p+W[YP e#=##*OLNJG#4_ԼA#=GjgٿrNL8T2(Gl?C}uv_؋ͫ/ϛ_l} "/%h-$A^Z!Yֵtc!y9%brTD<  oz5WLU FĿz'jj TMˇfy.eTȎ_3$GJ<)AO\Sj| jVe |^橐]^K-F}g#AHVkz]V%BZ ?*xt$4C_MDT4-6F"Hp)|~0 ŧd(> X̦HITʁF"LN7zi%aLxh9[Ϗ.bJ&/X]5a^/W BDzzdyj/zr_~J[,8c[G֭QZ)H^B9/d81i02\kI&G$5AT#KT)50A=MCX.4=4'hEuM heKIs@^QҌpPM4UT h$εvrs'hj.O*='hjݸL9 dʬlPrna$@Ÿk݆MK'[MZɿ[\'Zr ƙNA t9+> < }'aWzZi_Z0IukB1}ۛEen6UdӮ5Ɉ7qh&#$8cM04g;N-P \ߤUu_Z_7՚gs?/Me.?֩,MQ3N?wXpDu)2>ǰ=^A4?42OHFZZ!!J8XYm)19mIG""If9ӄx/.?}6*FTEbWb&9#s3h %k"'J=BmA]O4יx#eb!IZ gj[g5U8 CR9OC hepBɍ0S`j e&D')*Y55Ǿ۔kﹷVgQwσ9x7Ok6mMU1}香|&W! 
Q.N-iP)O4L$^ @Z?4]*L}./.`(103hZF.(g-Q[%*cX 9A_ U | 252B#GBxHyH22C1) 4 )ATszɒ1\Ơ&HʃC!RĈI#`OZEZSDzWc+Ԁu" "t1J+l![q L*zn: a _o{ s,z, SQ96xoDr$jePH8̴Z &3P6q[מΫ{oqQ &FE &80)BJP6ъ:ǣns ٬ lA(VJ{THYK9& LX .$))!^gBv J6 9FZ`X+z4s3jFg [Vh3 m Aqޮ%`ZPRkӂTWKayj(= b0Cm  Ӄ-A`ZPk =؊Pi =D(θj gTHYhFTc^$Q+PS$Q+P4Q+ JP#D8^,C D8,CD8*-CD8-CJc2ԄϹssƿ%b#h!f ,c}Dkk5F[=3䒊 _U䜙/Y(?{숱մV,9Shk?c CYa(jm16CZV{Z.W/-׵۟Xxp6ldڹTg[.@ %LmbM҂r">"4u@к?*ǒطK8.r%JF[lƓ_ 2QHi+.g@؛o%(gQk qE*{ j fC`drr6`i!T ,/4 蕍b ST= z#sjB%a/CPK׺7]`>]{MEz`ݥHvw9qݥHO/zk!Vsz,WI~l=&I4vôޥfCs5iBpV¡er|?.EN$ґX< 2=¢h,TԬL'6O>MuZkMr5O  bŎ ک-$ڳVd?eM!o\EtJѯx/õZR rTMۀ^ Tެ[yeukCC޸SMEe8]=07x&di!w cn#-$G/]2`?٥zRݭ&wz{o\%nqڤ@I՟dJsi5jyZ^}3꧳'8pZ$_t2߮J@V ^rS}UI5$r9CNq-G<$^P|?vr >D\@M2fbeGˈ9f@ c*BuGa(s Ιn=3tW}0t7iUU0fd9ۀV/7wyEʻ -(7|d<շ ha$$Y p! R:?qݫϊ{/xH-kW3A<NJ݋uE-U7(Chd=JkJi(4lm{ &;֋{ :GV$ġmݗyaXy GSѬz͘봁=koFEЗy?Co- t`t?EQP[R%9-Ғ&d 9$X-͜ל9yA~2kW5Gӌv`hiMͻ#ly[!eM|y;ׁU&,c' c$H8ך 'b%3'LO#{RMv `uTP XњJUjE$U}q5WUZ[ O VX{#/55^jʹ5:ԾCn(atoQWV\zA7>yOjM |i":{nj Oζ7ڭ p)eBL}iT*Gj`)5Lu-AB.\Dd;i7Q#*B|iTLGz`=l>捨s[ r"%SL6/r_Qk{Ȭ' B{*uחU~5P\zɔDn!=SO8{L 3E3-AB.\Dd Yn cތwHhȣ! 6;xXz"\zTщ=5cn4vSPs\qp3vƸ/q$EKNqnӾvS4e?=aGW T+˞/!.den^YFvx9,erH%e[^,_Xpq>,z ;Fŋh}^,fm/#P  #3-K^T 7wL}{/_{Z#Zfwl}fhdUX$$Ld(D#!TD2-a2E,Nb7ebJ<0 qOqdHmD)\~ȓ3#q-"<[86ꀠE _&tCm9Y&iyBx`N}!#X =F5r(geԏ 8 Ɯb=<18twBXe()V f/F4J0zPԠsrL_qP#q-9HK եeS.2 ,/2Nq`vq`"^/~ܠ$\7/XA{#QS ^g~`A0(rBYʉca,TEҰr䙬>F˱ѥx`$ڝjsrv3;Ƥh7+-B#S3Oqs7#F% L?0g(Ij}1ސ#S2YNŭ ˆZz2I L#ʿ0ô!@) "VFQ\2A+?1^ @G4R '43m k2ﭢx4қ̗i{ ی~%^ԛ%{Y׋5(pb4I(md}7ILYgσڇw@ʥ%;h hzM~5\X~TFdO> f>.,5 23S (GZPvwŻW?1LI+iY@x6@lY-kU* .v/bP<˛x~w7ebsX$ZO/I|oP5YN'_Mr~ _a6HMە2< u0~'8{߽MO]d]@v8?7HT<≠R`/0 D( x$֓\teuY,Z!FXt5TѺeφ3vr?d6)$wxtc0FUkݳa G>` 8NViߒ/#{ äxUa"Qsi9UJ`Iqs$Uy1exBA4 yyD9Q\i4/$˱ DՀDd4Ȗz FpYF([;At_&ѹeņ UĆηEÙwI%٣e: ^\`ADq4C:/85\GBjpbWlg߲@8t w.Pk_O=.'vC23e$5]"$wp2/e erf2֛ycp?|>_X7cW}=∛nf{lÌ_ b JƋd6]'&J0W)>?34P΍kh.5mWavL^NM}G ;̜AD @w޽}ٻ{wRw M߰Gn0}0R ;(`yPOꎍp`2e_]LәIK&zrE)IssuQ<b a$d&~Ï:z2m;,؇ XP#d# RR}Rbed'^d'6;<"'UMcmLsMUN*)Erw{*[郼cA}MOjuPCTY1Z#{Vv5{]mC]L#{˔[Bb_%cGvrw9`"l u i.*jY[FpNw떶RU:;¬Cr2 TKSY]1H:N ){iŸWLt Zh>@]j 1(RWQ:ʃ)un4Nbhu"@]Mdsه4fuDgEsۓ 7#Ru$X8\{ETtT^\5=AM)J!ia+g~l\VF`ա). XGuSd[7ZuSFoP0Υ%E9WW ]N,퇉e!KBYȸkTbڮUgڶnez5V!J /dgE>j8Vϡ@EPRRi%Vhd\MK>B{B*EU`(Gګ!H3 br4 0_>Э{ 'Uʋ S3SmϔjLa3w #e%\vߺu*ـpF}BOZnM)(ePL*d:n wK$c.0F]zJ5-4CO#0do9"@.kQ>ql0.m@-̅vZuJD0صFRQNCi6 jG;L*<аv< &Yh@׭ •XX_Rh`-b{ ZfH&]l_[c5]lxlcp< tPbгj/)6aC80:nSl]MS]x2]x])Q AXųn9_%H-ǵ" Lr- 9&m6` ņѣqF0%ӇB!ɐBm t 9%]l .\qA!t(ikl87o0O P{CFB#bښUW7\06ƭɦ 1ƆyJ0'ZI Ωy$dsA0(7_gU/E?ZLSMz'2E,NRi DQ0^@Vl<%u@`z]|ж1(d[w xH 8)m _nd9GF$`2I@18R8he W?*ӔpѤlA23i42323,6"d23)Ƞ8`c H`6fK23!mj??z_lf&Lk*i 3M/68 #BT-aYB(f4h Ft0&Hdtf!vK8!)β#?vG0W |Ԛ.”KKBe/xsb -[X,NHX"X,_r[\ud`W5>>`,SxcT[AȺX>j:Z ;?49g+^/C\](Ofe?)TKP p1*Qɜ,< }:$MIa UiGl~X9&G9?~|Jf "Vxgs'"Opm4&1/W:y&|"'M0'3BXG1"0"EC%΁$#јc?$N4Jc@<BdQ41QZ0=h0o>I7e#d}5Z%OpRӽME!WRf&5Xo~M6[XK\&N`nwW/Moo}+|k6Z9MWݿ3ؿg3l ތ ̆#㏯ iBQB](B|Hb w U4rǂ ]cܰ WB6b2,aLP̑D=5Gښ #]]VR' -k͇؆]PÆ &abBqJ a]2P2~Dvh}11 vݎ-k̅GR[+KNbM9՘Z1K$4yiaeX^j` 6zkx@ڠE1ɐI.Cvq*cd~~ěIoaOm^Zv xzɗ%H CTTW*|:aMQ^ K+^sS_(%崋;;硵 Li,-uf\w w<@^Z I4y<؀[uRZcU<$CRL8sPКg>b?/.gX&VO#ak*h(J{g"'-0wú5I8A׀gM$$:dբ20D**+#mNx{@sFN{Px6Wϳ^SP:Yf{Y$CriG\ܦ2{?fa8°t%e+8'5EuQ%n맸Ÿ\n)dJ*V(v.)nR*G5E7smqzRcדmzYh8a_2$Il3'?_ۭMEу>Nu泴Z`r{?¯~@@QDQ_,TKMZ/A+o"&IoZ h$s$A:B/O@e=x`l}Yzn ղie^3ŬQ/3{nLx>ߘ ߘ X| 8I,%O4IDhH%1q">&H E?7)ٟ?LiU#ޛu_LM<{q/Iw#ӣa?&֦ ӱ%Rd=$2ƥhonL}Ío7Xd9 CA&k Y2dEZ,U z;{s8 ٶ")y1= lFӢ3)sS~o H>uAqXBzB0ҝ$+6+0 |jwbuv VMc݉RH=ϕrC̦}1Ár'*Wl4F';qQui,Z*2(^bÏ)t? 
[sd7:B Wܔ'ojњu?g QUJ tZ.`iOo USGp6 疋8gw6APN= ɍ1չ>&hx S].4.1ŶI|X2𭑸yW&wT) 1AXHX$!1,n1"Y$RNJT^5P%d`0ERUId"yw^{-5.,WV b:4KH&Ev-ghbn1ÔhD® 5NΆen:6.5rPWCCS%#FZKO}"p*pb "/Y JdlZ#= SqV$'}kӀswHI⹧ |%sui]Q_"3V؋DSuv28P2c+ΌՑRԣ)ebRn"I1R$S6^YI͑-쓣i&)܊onζR'5G5QGsZ(!*pɤ&q70[ۋj1M]nL+ !0PکOE|U Bu ~#URMk0D Q`0eDS1T;k"yX)b'kTm l7}CA˦DD֐E5GEȥnn.;ȮH4F kqW)pRdǥ r6+=Vmpsv8Vl.@k-:d'MG&/7_im @0AO Rfʆ]E94 jLO,VKS؀{-d Xmxuw&M 3ثZ[-w=0]OTc ÖA3Bxqv p*Qw 02,`RnQ EA-}O) RufL'vv!&'HxA}n A|=a eLaT"j\%7Юk-r7RN>Hۯ8bHVCӀƉ ŀejB5{`2d9`s:@޻}u~ @T `C6w*7*+tZC'oA$N% B:Y}13Xmt 4D ӝn)6sH1mQA(b&!N.}u^GF<:@z#6)xk0zAC"x[ɚG5K}?Eggh]STam5'WswUynp=%O T R42n:武RdЪ*8G.%tV8k4m,p1WdEf+ S`D5vz$mb[SU%mq"p1*p!6݉lShk0lZ[2F˹1p.I|O.ξV n7QDl&y%^/t~%u"p\(nDK;6AjNM5kugx4jh(ZDnp"Yh-FQBpߙ#b%\d يXY Sv?|ҭsy<xTӀ ?b[!#sƪEIxwj{!lTBNNWĦ !mɛ^lDVxbQ,@9DU\aֺHy%/5w*(hS.\'Mqb.lf*p[P,[l]=&>Щcsu'0?H]ۅft2` }+6Q"#y76 17J)]m<:hE;HZa<9Aw$'ׯ7;}O_籿yxٺ|6\. BM͵X_5)B` eC{*mz=@w?f_Öth>NìG-1o~2M\k_F+D{ZLw: Ɵ#R+N#.#<[^y޽#[neKݏmƪ%Ȯy7!`ONZKF=‘x¹G{ijW# ukֳa_g/W&bKgBg&w7;G~v3.})#])o> "/}%l. gͶ9P-xӥoI:aaBW;/;'.m7\z%ُfϡ>;-H%s&қsRx|7u,h}OaѥsYyf;OtU~iRYOO`3C!]Z/^>Byߋ?/v(ࢊ_oƗQ~we?ҷCK-};^~lZ23uj`.%b.qmhDv1Z}1rxirfޠ~~-NaPs:kAw6]ya{0mqgTzi:klb0Uk *SqUmT)NcϘZ5v>NB WT$/isخ9!h ".X|<!ŧ7˵`"Q 8:k(zjIrZfu4ɾ6@ױp^FܝYFgFB~wL&|N.u?^ɐŏK|7姧0dBt_xO>|Z=~95mؓ5p @"xkP?kmh9Ґ>N_xv@/gt>mt.oafrxHaӟOWN#->Ejmuwn W\ 8׶~7pR1[}vמBbBVbOFmzD~-՝'?ڒ|z9uҬewm3} R&X9"<$X VCQj>HUf|5HML=֊gs83YkF jaэa՝Eӻê3؋E|p]v=OdSWL@7! 8D.7c5voQo|"PA$?؉g|wƅ1E0(N; '㙙hPX=i HʣO\SGwIkpv˄&m]7{GV*ve@.=F w\vTbPuwE,1-i+ӊ.<%`a@u7k^_uaݟ?NJ~w'|Qݿ~w>5wϞRyfgyK }lN?t6/+`JEd1mґE[v!MQY "܋wM9|% ^??lV:*݆$%7J6`|UFQӹ5-u#uY_|4y`GQ j7ΆVdW[df؜xF䝚Ǎ݃< L*yC/%(I;w^r);q'˙pXE#OLgArk132©*s h)kҘtw\(48!\x.Yt ^GHK` ]G[ap xcf0y#קo~idOy>_/E㛏LJ'Tӂڰ-te"A[V+`eE"@`UPp'o0}(_XIӎ5Wd)?nPM)䶏'}<Ɠo9M:UJԕfiaH+ZLgw5Sjѭk_nAn>觛~70t//{Zh?~=?x|N~&mIkFDY^oAb,l^о?DļbcC&0 M5]#UU]xյJS*סq$IITQrjw$Yxeӣ'8Neu9I9قqN72+R[a ]8 D_e:xО8=qHP<2Ns$p58$Z]nk +H ڹcg)[eıN,QWUY̐IsE*\kRDdCb0ZN)S"GoRgRAX^Q'#jcQ·Irkx;@q= ϽP.B/F^"FZf#?/\lG-oh'2ws'6TɱC (*쥕H׼p%dȪ"SF9%$9I$iu佨8HkkƏyEE>nȹŴnEN݇0r? \+氿W`8x:oay[Kt$q2!󁷊Coհ0>DsB=ڿz.P7 ûO FbEw3]8Ьͻ :92.ԝ ԠFUPNLy̝jG=1:z&,ʞCe> OMўaŚug+X.a.ʬc,Ge׈Cg Y5GFi: '4T C{6֖BA$68atfOX(HyioH&5hkٛg]z`>:rn2ft}U5S+.;%b^vMpGÄm[td5V+R X,qİ]؂aOo}+[aU;״5ʶ *:B)_)5-c 'aS46ᣛdKJLA=`<[ϾMmt x__uk(#UB"*5vND]C$+3uB@ҦlָY"e,ۤ֘ݔҊG7ťG> ݔY00u[bAGƽiY:b4IhYzi4-*Ty߂8֒v|y"Ȍ,&+irvƿoOgNj1-Qk^Lrsmf WOȕ6uhԬ@@#POP[b:X V!YP Vgge/*qa)%7̸@S1pTf|! o*~F4R[Ufh8ȓÇWFەd%nHM%0[8VF=dwx2*\loFW*[R+D A f֜O,FkU8C7H*ā(6UziecN[[C@LqtX>c%H(Hd$ddR7E3bŤwrl21tF |.IY$F 2FyU >YtYGÜDKN֠He\T GӅ*jMEEfH)4-l䵆Vm,n+km6QX˪}_[~|dp|:yʡPb_Be!Hf^Y\-^Y7%T>%m>Xև?~_W쬰} D4dpy,zh4,*`im~3ДJ*4Y+R l:xzXdr`=*o]x݄HCԎtVh]ȁ͉gDة9XyS{49]4'ʬge{s VQ͞ar~\J;Jxf8*gOs2zb6r3ʠG@'f-ą.;1%YYvTMY th*x# GwPK#)Yv ȲN@j&[+.|xP, ˺;F@ p:z bbHzj}+[mJot˪kl xA5v+F+dOIմmt-;QgaYS4 5D6iU̩&Meߤ_TbbW ≠ISظU oU۴vDՍX-UҔߍS[%`cX֣'`YVؓò*3:z;$ZxK +qklHCˊV/{OE'iY\5{pp R6Mc*c%􋬴ilBGYiYSlCqX i@ܨ2}:EWB˜DCP-5u:DTlLָe.6iM@YGTKqcG6Eԇ\$g1v<,k3RB={q 1´tvc& SdL8/D S>0NKjfl]O@*{ee4jVZVO^>L`9?2,uȷglQ-b/J]Tfge:]REa'9Qv%-TҗЪK' 2nZa`>p_1i__߄||sm|!7Xoƙ v& ?VH#W|(lsےpF^?{W)ղĮU4 SE\kmtmǮb5AgPqfY t@knyBR{|Pv6s U@ ik[Z CVoI vaP #c5vNc_CtT^ VdTFE UluBtP m)?bQOVB ֦vh_uFiԺub]MF5l*[˾:Gy)_wO0Qu5A^p4MhjcJuT%Cݬ:sڠWv& 17d궆육w !NYgYu>ps{1| :'=/Tm󺓪 TW VPchNڶm=·W&ǦԵoٲ;kL.dz\ņD{)^¯5tXg,7S:ҾA5tuU{պJcƘ<NVt1zVYŰ\7MuK!R ZWCU!娈tHM cHk]mo#7+|'Y|)2|ڙ;M, I${KnZ v7Sd[BR%yGD\{Ⱥup)m Q"xv!pHh 1MVrP?KDbHHtHq&It(pL$2i}SlhekiA(T*p(AQb,9y,,2rN ,0e\S^PZ&wb>V0s0"(Q@A#&YTӥdʒv)ƌ~`$ HbiFltJUnCy֊tJ$a=G\&vA?CӼD.Z.y1gRD$?IzOY=>аpXαS`p=W9[d$*IL$r %MS8FTJe씔H 9}wư*4^H&#ia3rVr4!ìvk+F &YJlr`IA&΀! 
Hp-2-1p8 19qZK!%Ľ‡\ D2tޓ5Yٞ W&ek \# b S[$ fjU#Y#Idi ,HDf`YBv^1p[.N@EQSxMFIXQU^dghD-:K +^rNX]ZV^H2V!t(ƲWZC U9QUIS#U@ \ȃ0MbH[e+[^yggJ%)*/WK⫹H?3pI'8BSx( :.#ߔi2JUHNу&)8wQ $APýMqcu =2H8(fS>0q48̆Y1e3Yf O,eBrq" ~FnRyok%_^gڦ81# =F$!uB;d#]P !1w$KM6cy="H(Ԋ5"ƐG$I%8҅D-)(1IHi`%χ17*k! ^}ee:% "6)WMvPi%L-{ߜ< 0fE>RH;3 9U1]4@^7i͸ +ƄeX2u:qBW1g0MQ04UmPC{5](ƌ~F؜ 3($q1%D+e+f-ĈgIcccL۠iljcPXdYD΃.Gbp}VT!OȀ^oHҺ:QQȭZt^.~}JdȤ^}5R\v wFmWI ?%OR3u)̂0BGF:L@8%|B_vpVї:$,W61{CL-+_JZpDh=Y`TEV+FGA:&CGA:CMQVԡaUKu!%jU%jfՆduq(y; jՁ_jTNJuqV^֡ijjTePmקkKz-p4t++JkUX\ M i9ݵ񊔠8ym03tIݵEL4(c fG(xUN4Zm/&GhO='x3SbZz&d1Oģ1ӏ?^9矮3; QJ3uDyqJ]eX|oJYҭgR!iTyqu)mA]*M߿O/7)7OԹ5SOu9 {M+{m˿TA;]̷6}^>e?h;=kh%;WGs߳bZ[}FrρJa}ps:Zc[0u׳ !VS5柍w3rTw^nw漭KMnv|FKHfg: ݓ-=~s1n>/&?\|ͷl߳+F <Ѻc흣uGfGA>(>݌}#zJ@}f#zw'G~t)qf=Z/a;{ɇ}t*ua[Vn5^Lk>Q/"n2Ѓަϳ~p:c $=*NP &<* ܗ0hw 8'83-ݒ P|יoT[8^: 3wVpMR@ *d0zm<tdF33ӄ,f0UB)pYȿ^8wA^:/m ߌ,xYDJRh@pҍu W`@/>bpu \%fF6c&B$:EΤFM =͓74q.w>hn17")pRI"CF7{<MR1 K1\rYdroqzUчDzUFw%js5B^P5Vˢ+|âA2){.)JR1(+~H<QV A<2/ d91GIIL6 LQ䜶RR(fqN6?Pj@M1%wd]޾w}jZ,Cz]6W4]SCns|yja:纔qiﳋ6 e7I^.>M7仦eTd7*47סXifb_ek*簸b]n tAlM~HPBMʦ:%Ucnzs`Z N3a&Nc[ҞBM6E.}MZmxNsUrzn'%II/_?ڭ?y[׷~]k4׷ϮAyEMS}JԣLSGWʫ|;1_|q3>%G&/oJIs';)8uik>yZ,/ '.)B0ZNWVjuxK]=}yMKRmg/o;Pζ:pj=#V$'9gsNΩ sg: }m+wm#}vt!uy-s>Ⱦcrg3Ax:qJ~:kCcsd)s_i߄hx`{sٙK"<-T3{2N_8;q֐ <<3tIs>N!]ԣ }V}U3dOD*U'iewq_o;Bp[;lۤa#Y_K`8yƹy i>أ ud(7z@}.> ̆{liaΎzOPy G疡wŃ'_5qt|x_,~sҹUjɾ|Mwu'qO7ICGV>luMՁ HuJ-UGө{ɤf 1jEbH^}It3Qm+ۑ+9B%" ̃lX_,cLݴO6)Ni)ݪ]5B3OKn;r3k],Z.^gܪl z C?iSѳBgZny $%O}8y~9s]z^YMܥbOew< /şYsIϣ|6'zV[=65u1gkVgE<_FCr@f͊Iʯ:.'ឺP3؁uhTʺ@9Y@?뵀z+owuP[|s7j $P-.9j-&g3K6O5bRxfyyj}&>S7]%Q[Pi!Li\= jLSgR&79jk%!atşD=c9ntE[&Wl1RUIk쵇!ZJ'@M"Q!YK74yjԾgh-HIԘs=9xۤLTg/NW-vL|.p\^<ԠƗwZtO]xC\I>̕HpfCPn Te Jk$h=f0]؃!̛ sV2F]\ʒITs#awEc:= /ۤ4jJp28aXLEp m1!$Y!u:Rd`&KB1wN ɲH.T-k˽$mwbow[4A0 A,&o̅58zĬb%=:"Grc+ԜΖZ,$-CK-٧^!cbbCѩܩ*vKi؝3nuk%BN٘oAᖩO-Ûu:9 ( "VK!}>}` m{2wsFԌQ{i4rxCevCTRfoy[Fj"lw6x&ӻADl;&Mã\ 9L\j5ޚ z,řl4TU]/I%Y:{0e!"OC \=ń:&b)b !,v2 !IO;S yW4cp )V"hhI?#Y'KtGsSoRA|V%~z)m0$MVwh{o]6%x>"IKRDbhP[DCn2%8< vy쩦%ը9!o+[FPƖ&s)֚ bߒRɃġi"eD7`,2=ȶIe:6Db4t1D$bp#AB+U=~ѭwĠ:[}`]erc1V֊ޛRWDb  12g74_AӓG&MQZ!Z%12J&Q.\a<伎mc ovr4^ic•V-ҕV-vҪoZܲj쎮j1V-+Z{0WZJݕV-͕V-kZЕf;_iG:oϽ*ȁ Dz8ם^Ud\G0yTBcxSu^>K[ e]w]ƪ ʯ^=<]s|X?/_ܿxWw/ӇǒwCx*-g{_s"z:HQgF \ r-xq',/Jr(Xf|턮 3ǐ9R#| qoh/ѳ&8:|oz>g+M!@rS=dy"aǡo`{D:E0 5oIJӚ'|Y4[EQp.Ȅ㑷Q8'_! ȈB3ǯUMI:ɺpfSRg8+-QCb4LuqR6ؑ_ŰEDDzsfP2z"xJY4C--bբzmȆ-ٍ2v+03=3&(!ыQZcb6 'zgdkՈP9 qthl-Au25gv1^/ga]=//>{,ǻ^cvw˻enA݋߾.lsyME*0ԃ5?<<HmOn6>|vUn.{MÒsf&拻] hwO_j?S: rNqKo onc:mnەՇM}M]nk0):ũv_=R7kۃ̶͎qmWU6]̻M]nk0):):w}+5vXPf` uf8Eݶ[}MSiKm \9Eg8ŴcuSE6;K3) sq|b6 t+~H݈مM"Lqmbtu8k,CSb9ۃ̶͎qm 6 ruiKm \9E8Ea;MB؃̶͎qm+stm+4ХC3s]p9rXU%ԙmuni~iKm \9E8%v'ͳG67\n`6 ty6ϐ+б^̽Y1X|*lwLݦ.>C3RwHݼImr 96Nݶ;5 \6 tqCSRIcuIf:;&8it⃽\m5rXpnAi,lۆPry.U5rlrnFu%ԙmuhr;B.U5r{GE[G&]l1RCf{a; NΉEHdThϏMzfΧC&$q~f!quped&zMƞSsc\ 5Ar%L:dmVK+Gwyb-;Vf]qL6pݖ`$ը'yl"%bz¹!R#Aiw *f`i1T78R&Ia }WH9h4c.&6+AkrtGq-&jW5vb*;s6tlU,+}f4:Y;ԦqȫXm։5qH˞ri"&T{Qs ZQ T!n~<rd]&z_tZ2WѹHc}Mkɛ215 ^C?׽A`S3jR ލJp@z2$\\4 k-SڑI]g$9xY |{iwi͌O굌^B1wNU_I-aÐpT_.Vg`umt#fl/xg47+΍.]Tg` uf8EݶR 媌@ rNqJ+_Gf>mv nj}U3M怞o!WN)NB~nwΎt&8龧V=`K9 t}O+0):)6q+iOnoer m.l//@Y!WN N,v~nι1Xv\չ.aSus:rBN_`M~xO?Qo*~Qzh7f{M.bMo{ݫ7__- >=ޑ[rp oqrK 7pǚ5Yjo_?^^|67(n 悏G=~_xY^~eIʣWr߯",ޏݓˇ?MW[%3Р!8p4|MoǛmI65roG_Wo|{9ذq>~hb2JZtfуV+e"d"p7c.jäQ. 
o>S箥D]b xY Yūh6QO+7nfg oPFAVvpg?wqd5nn,Ųi4I9~ sD mLV!.œb{U rpp3uB.5l̀x,l77˧]79la OEtGl¨F7MfFvpY#[f+D\eOʜhrjǟ~Z5C.߈PNl:Q~?ipjyVd0T|W˖5E+6 ӏB]˴C3t{?auˊgMUˊjuWnY+,Qܶڻ’;-hٲY {RJcM؛HY_]R_T=6dKR*HOcF")h07.-K;ryO;ex%xIkzuScsD ݪMH,Gfۥ`CꑶDdjJ(җ"o-Q]Vyh#E >_tH(;)f&(\:%/*F:O)X)C=jώimv0%nDM( r%՛nHe}7 KMQ+ҺˢVŐKVlaaVlDPFD %7av[0&'"Y6Bn%WbBw,1BaD_%=DzbIG#uٜ!{Ou lVZ =֯D7ø:U_2Ý=!vki*jbw.KEKjW@N_.yB9[d4j3","=3'zK*p IGU"*zBLū}JސEvLOL푺c퉚IZbpuDE:fr5KӮ#љ.lƆEt/Sbb( '%F]1ۀ[?(ܑ]L?njE:O]Z޸Md EZ$aܛ̦=/6O Ĩ~XFD6F(ƍ㞬?O5]'u36A޸ڡ#qqMQ7N}OzE1&71=fxk~&`,` Ov#-TFoX5֓uXpEyˉBI Bd!^3B/GK3Or`c:ڑ<'ĽO5c-|zhS~2_~{p:<W/d8b ךy4I09N0d8 {Camr&CD8 !8\`5ąCF19?C俰7'zdsXsDCTS 5ąC-xSvP8†Bvd67+Rxy( O_Fj6-'ts~/WyEZ"Lkt eȣ`&Goe7mz@ٙVhUksO(&O(3o9%+F壯@2],p<*X;zR| F%xGXR^ظ 2h ! :+혊un<ղύu(AZ`Tmםu]po B5b;Y(m$샟oDA۸.!h!6ދ`=,j-Fyu4Xr}X+>b~ki~AO"22٣0/W~t _- 6մA@aR.b2B NA&n K)?ۨ%֌s0L]җQKZ9fCeWz븁BEUytRu\5%PsJ:mr YjDIڌH11BS/7Lvn B6UfT$ͻ;p4sR3Ƴ`!n(ŠCeJv"ς!̓޳K6b7? A Rx9B8dwEd(ˁx}E:d1958׬#t9"EKgJ8X  J\1tN'<\n&ÉCZR 8D.% 1Ph. F}X'@"^gܛ,0ۛohX_G{@R hf>'?_< Q&Ga@tyq߉ }9)pm7L< 2yd(7@t9pdD+!(KIS*5&y0uT#lCfJ#5=2P,k S~TwQz9 7OF> U6h<[}͒ڃߓZR#%-̜ߟ~Z2pR6jqAx r~:trJ`u+AV\RYRC|~y=>,ٜ_Ó%mL |>(C.>B j΢EC\1i^W0'[7Sxol15έ yzz-KMP$ J+K p^tz{^ Iu,xxֹv3p,\OJxd?6OϨ*'7.0bOw̺X7~lӄj\ۻwzVWO\Um\h Y?9h>En .9zEح,\ZS7̳$y%o+<|mtyZ:1{Zc%`Pf͛h%ԡMUfV/L1ͺGYK"V7 Ƙ5:psΔdMmp3_<]Qw_oTo XxRe*Ԥ9XLBȧ "gxMQIc\Vnh@bFCt Y0Dg ѝJA ǚ2R!%KZPaG2|/X.F%7Fњ?]˨Ě4en };KΐOXgaiJ6G4 U" Y$DU'6{Fɶbœ" vwPHf<=\YBrdqU;BшyA!gzu%!H6rAZUpt\S Lm.4az2{؅.Li'S%aq„rXۄZ%YMZ-kPc"O Or 8#Є!b$2(5:uUM(*pse\(N9p5PK0 qK$K $J(0Gu.lP'aPR0q BBB, \pJhTc: "JW''zL erL-)aGkA"M`m8X?`$LJ M ). r/_}" 1MCN[k,[j qR\%Au 9HG uu5 Z -׉ rM$:[TPP{i^P%SS!ɾ!xU-8H-j1Nu:,A)"5˜.3~؇sfUޖUn FsryŽzM.jrfPTg"Λ : 1rh_8Եץ#B`__%O샺lL^88u2/յpcB"WbCvMT!m;jwD Rj7q\H]<(ԥn="Dflڵ8K08S2`NWjcC/+kFQ/33U}tj١ZaxX):X$K*CY ],0%2?΢kQC)>GbS୪N^/+;!yD~s[P8@R5؝=wP"ȓ3Ւʊ1kF~ l_NGHhEyw(~{b_+hv -1OKh #g^foÕw4rL3Ol[<\W*^#߿ӽ$EmfhkoYjD:mqaiN _@YgOWbx"ϥQ!!nF Tnq*G[!YEy㲫2\]}\q-A ~MY `w*J`ME_iS*8jvJCCB;snH미5_9 p?Oa]qWX&esBX!_vC 8HXE*ʨuҏt9K{U).5&R3A=R 02ի9yF [[y=6NPM5B[rFy+'j36|VN8ivqzh/d1 &xOm}+Qһ{㗽".3;q gVq|3|{a3{weo8Cwx_%W$P;KIO^|'RϝK) iK{\6zFk?&kC{̇Ә"_BάU%5Z7W l;XJYusfn>F3Ncʸ-I[uL j[BEJ9ݚЭ gò~@⢁R wؕ=v׎M AwJB܀HN !+H|ƳuT NIbFԠ<{*66a^~*JE$WqzdW'ω+kwe7f2{2 R'=q8 "8~"c$Z2VR.ftp}4^B׆ivS/{;7!{L5c.qS'C5tB0(t" Ŗ6uffzUX`Ljx5_k+=?L92mpa7^6)Orf6!k '`5QAPMd4/?髟wz;Ao?s;}ۂZvZ[zRgFGr݊kգ,N.V">%r(͉$33.!. pΜK*4%jpP~P#6#;2@6 2@OB!+2ݞ FJ MWψnl4uu᤟}gOI^ \8-GRZGx9Շ[9;p1guJNV/ly#1nzRk{ 0reAC X:-}UG N3Ӎ?![OSgbaںYgOWbTQ$;#do.ļPcu/ZZZjש;qnd@{,deqܹp8ATkrA!g"u/.qB.G VQ}CȲh0EU! %BI8\2̆W|ēSt,b6s&l8Y8HM:!;e|rf(.Pï ]33Fq}Y=𡰄(Ry0=g kťƀť&R^LjgvR3hvfw(.5g ť乤vf7(,+8K\#RІ8WP!ťfڹ c/(q./,.;qL< ~,ئ+LjXv(Kp~BG7E2ĩ^kGr{|^2gT*8-6ptE {'urb6+j$w6t%0(tk0M#^]jq6MfpA)׮Po M-fC.(b0eċb)2YLQP'.Š]Y-.(\=x[Pz5^ :b}Zg/ڕ<5kTlN6_>HU~+]aFCf_6 Z&S/.((ǂS+>{E0_z1_Ÿ~C.W}@F,g.(H ,bվB8 ߃ Md#Ae#٤}a/Dzsru9CX* ~~({0R:qf cҔE"\3t$ƵůsBZu3ft1Wm.5cǛ"ьQVS*!\SB?ԣI52Иe氆4̇ 1&yʺ=MӨ 02vUdUe^Iu[2MJ1WwZl&᮲>$Z&캒9bx,r&NcyDPؚfb%]D&2En8T3]9r""Jw:FS>$h?P5rBIEa*/q4 .Е5ϥT\0m\#FeC+ RK( "c|%5v}2_7CHK꺱|:cy te=0NQ> 9D_aGXvwLMpԦZ?N+Fs5.Rҕw<+7.p^1fC9nihzVv!i'vZW>t''EiHFOG34,ř3}2Vm]a;oiH/>UBuEZSX)f;աYOC~B@Lx;ԅE'`ЍtDIFsAyMZ`Wj}r[Zb% /-*wo_LW/ȳszmSIRjugV9iIZKL~s7ݣ_`9o瑜)xbIٰ ݯ̺? 
(mkOQ0irtP{< @`@(,'P?*'D QƐ&'ɏ`  w $:˼9)Ҧiՠ!QKNw۠ RXvYʀa3;%_P"(÷/ΩnQ+3p۞pƚyVBPݩFmO>] =ZuǷtpi H1޽3laCڴEUo!{ب.c<[}=s@Pcp}Xy.3wcnZ;j qRNm8J7:3᧺#5/ L/!|߫b?•!,HY) F`~;J9킾S&()?}IT\U&]Ap sKfT[EbrX D9GDyLs$};$n_B(c`'TP, o]ʰ6M[@sT)T_e-~Rϕw< e JR b['+lzgow'$d Tt;l$|#of44l"aJ)WLڨU!+:@|A%˥:IXy3vB骳kZ(Fo-ৼI̝Y&ʢod+ !#ӢW#=lW -ɦ, 3"2d Ḅs?i hsۨIb}"{i_PH;'Gބ"$}Vr0hn\ٝшgGJMwT)֦%+*T(϶RtaS)˱ MQKKXRhPԑ1 K0Lz2!6SZ̔| ѧ ~H@*~`5QPO.;oA{x l[y_P!bDDB0PqD"B%2lD@DT 5Q%aB".nkFF豲U \_t[* )Q3"m}P~g/2^ Ʊ f& n'rK oOt|>#`W's~@!4?;pv<>gEl^M#5Iإ7{Ȅ䬗`KR,`ԱCH9RSw!!?tbR-0 ^+4XTYȏal8˚`ԡ5I< ask s]J(lMVc[ k(b@\0XjxTnOȯbey5ɏi :'y\>?9__%O)u@  ~e-:?D Ԋ!)~cX>Z@d[{'W y̰ڌSYmfE~2>c!LKQ:ư>~.P6n˧KC+YAxJ  wu?.h[V3`MigFR _* tRy y6m]NhSڊ ZQmfq5okj4ȫwm$´A]_E}㐦*vht, Zܪ@G~Jr}g"?l2{Emc'a],%VTK" , YKߍ_F=E$ ĒXԯ`:^=-!)[[>tZ,cݣWͰpyZ۽;¬l=殫iDڿ1Ue'!Mӹ|Ua\sd^&8м*W+V[71I$A;4Yzcrr8S^JXܼ6uz FΘԲ?Q*(ݢTrS*7snP9M0~KLPwg>C7sS*0Ro㖐he)DX.RgH*p**-TrJ2_y½70pǢFpB;ª* e5l 59*p8 x*$)p')3CTQ% ͹X`q 8)d*V1gs:9QDD,G`3L#2ɸ'{!ad9LTH6^PÕ$X#`6H$ ҧX8EdPpw$ U#9xb4( PLUl =x1ISL8Pda/J¥3ދJ#Tk rD:>=94R>*t$i+r.݆1̔ PNj|r+ЊS{fE%|%s0~Yo:߰Z @Y4"5z=I.zw\zF׹ݧM$[lz?f@Snhs p#-ќ3Nz1_AUo)vC[N08[ŧd{ x1@I0<q~|K5,ڔH5e-Uѷ%%iK}ZR~4ęI=8gEOY;/F%xNW Lw/kglI%s[AlbgQE fA4aF'Ôt$#]|.qu>?4NzH {@#]u}bi}߽K!۰޲r"}niI۾]jūۡ$Q+)jNmZf'Z Pfo59MZ;B;&SZPlZ鎆P3dogiZ ۡ68x# k3r򏛲M6Kݾ}Hk"F7\4̷Ony>9}Jw.c @:wb߬K}u^gzpsr/GDo>&|՛Y\!؜cݦr|)__b@VCgx`~!4 ;*d/9oHGm #zsh_ͥDdo-tN8sk:Bj [j 9!tl=*9-{SWOOj{E?,!>wn7t؟w(lO)@7Ci;=v֚vxM-m>5ZJnrF闦.?Q-e6Ok% Or;1oz:wyqqC~jx\^ GZy9h1Zs9h71[^ƿ>1#m$g]'zwU?=BP1}yA14 #޷%г{ӽxio?Rvo3S/G\|~w=znijk`/ĭNgUwv?öDB[A L;ztwmȘXxx~Gξ:k O<ڻ1RY$8#D-AS'{BA9-',%vrl[hΖ.}Z;=6¬h2IJH#km[}ՊZnfFV7G,\ug[ gϫ^>lyUx?֫O]CYn>SJnZ^BMNLx;fysta?wMbvkYy\T&E%XT)}Tq *[s|Dr΂#Vm3VPhƑ-a@El KUHQ.HE&B}ՌJMmU씼 $MyX% N(| zJT/fpk ׮dR6][ηfe7Oy y?Ȼm | (bHkGUS#0[唾gV35,: EBt#m}!^'T+G (תϚRx>r_RuPi#i!k81JKZ9$-JzcTRܲR~MRӽs2(,coe11H6yȭM\k TAc]Oh4tQM؋-$Sҍz)ۮl(w|Ĉ?<A2Vgsp|(~@I{u(mkҞʹv'I%E"+Qp:K1b!h9`IK{ԶmP[mP/-/;=^†?|{|NoTRA,ONXJ{ΠZUf\Yvg^yORʒDs,3NFsGMUHBDLY8"og.k8X.s@ֆ3F.2U}ZupM[Wt4\&1b: trq+1x3T!0$85 =YIhŠ4Qd?UW%너4zˬڧ*(*:Dv*e l̪ LX qfrK=?W9qEaD+.UWWIiJX֌Z=&5BvLE)jDo6L)1}Wm-nFjt2c꺯c⺯Zh*;JzL[U+;=&wTlc:l݈}Ռ+F?Z5h1b1hR}FO y-3%$䕋(Jv&F6IOiMЄ6n%$䕋(Fxgs <1h7,MD6mӚvCTW.(z vӊAa1h":onzd֒/Є6n%$䕋(JMڱvL47hnG3OkHnhBjE%SVHИ&F6\E/2ETW.(v9Mr nX mpVMhSVBB^dJgk7-G2A~v.+3T&v+!!\DQ2e3}5DPa1h":oneog֚5\Z mJH+QLY>#8L4Fa^Uydx&0E%SvCڍ+;Š`F W yjlo MhSVBB^bdJ1 c&(n4J&B~U`U?&q[ y")E_|ݴE&F6\uyi MhSVBB^bdP>GM2;S,ۑh2\ )Mh㌐E%S\ɱvct A )7ZݙحF3џ&i y"z SL9sI~q|7#߆x%E:C짽'O %M7BkݏQZ@&O*bRQlTS$V8əT$hjUp\)Ͽx듉_ז+F*&N&X~/ZR>4_4r=w.&٭7yblɶ"[,,Y,lz~JYԛ4^Ku8}# O?JfyxH]͛_ӳ]j_QBK-ʷynB+°jyؕB\|0!酲ݻ4y`q5xiYZכpX]"(yĠtObu1W=:qo'~l8 Ŗu8VԊzdEۅ_ ֊CmH|L Ő3`w `bQ)8/=X%I0Dk::q3|p,YZp>jO `HL̙sf&BGSInxGlByqܐOv5)aM-z#R¼F0!F!V(")mY2pTRײĎ1%Ole8) Ceܰi$8Yr x0SSX$@< Ԙ)v7ĩQ17ԙ-!nl&DZw TpV;"bƳm0m_@H,M!H6Lk|Ĕ5冈|jvn_%3>mvgߤmԦ0hfluv]2rA)gz߾s"Ićkf9KyW@J+a#[p/ߐ Ѯ,vaBgO!xA{ sk {X1AtaYd.|ssq1Vy#IB/3qyan/Ynm4TM&F"1*ղ Y_DFƑ] Ps@AYPiXiAN~_ESԊ?m$`6!`Lh]dH )m@^ V[9;b&< @or.^m^?L~򟳙qoNk$'$_EFT5(nf'tumH5@MO/j.W{ΑST[CcևOv z&&M!UiB#XS29-#L{..LF|նN֍!1%"rz[ɉ(j #7a)Vk+UFH+'Im .CAM5Z1*8P;T/IȵRi$qyATtw'S')ư@5R  bKqQf{1CGǝCۨ0W>߅b:KAom~d5?dh?RTpU1|V@,`ˁߴ}zJ3`O:X⑂8lh$'*$;JC-sEA7 Vt_a9*T1A?V;C°g'6*NJZ4tbVJ(ĸ$xSc1F>2[8}f?݌.]kxJT]GʳL>k82O4\;Iz 9S :.ʼn3[Ō ^%ET:>\4+g-EXڔL?)E@k,rTZio^Y,v*cn)GP0J,^,97 7 042x* 7|{=!Ъ?6.R~X"VD8UZLN pR#S8*Y$`G"‘,x!5Y FKbB0ɮ3ؕ~R=H ) ]VHuS^s?R}^XIm-* +r/Ewx&aPt~%z6Ү {L,ꢢ_d C'Aq=NC FBėbQy\9/ma鈪WƐ*Z  T8rAB JH@"4HG>]f 9#8W+E9oR+x.VUmw3vS P )XDw]?şϤ1$=RTLQ)0&a0I N4[Ϝ=tO^Yն \}9|0(B]Hy/Z]E}JMn홅Tåz x) ֥CϲJSCVc7꟮.wINY.of*1*U5C.)R eG-p.xM,M3ҼU(kW޾2YxoyQC޺_#snjFAyc0o|Z [Y4{u\CZ 5h=WWtjO5? 
“%!<]ȏ<(DcRr7DѲw+-7<:8!7!*6HtJB|3FGʊ9,wE :.|ӉAnn"J?~,AKo0mya, P"22bf#2MaSq,5K՝Jf & Q+f/j_}%0qUc^[;{"(G_Z]Z+$%^2k'חQ (IR^Ln\ %%}v9X SA{ҝ'gZ?iJ/J<-, ô Thdri+hT9XzEg}N&_X i'.0krڰ_7h OVc=WEXcnN;nq("(}KL|,hR pqU&׳ρ14 rͯp0$W@ uK:_ }˺.V@ LpiǿífP G.bZX+tG.dag{c\/XC"q\iB2:50۱Y}UlLyo|T^:ݻL.Rz5)}P0wܹr7sljH9JLLR u.)kJ&I)SbPUTσ)9')-T(ML9_H"r4}wqsWX{87j/_nN sxBP)9\G@vyzpZnƥH7en<6| ZrkS"2QM.=~%H$DޜFӀP6bځ7kץSyuP7pQe91FWJX1c3WN:C'nr~ٲ/[̎Ї/k>DZr߈w3#qf!|mx犗VU܏J>W< ڼ\ ,ۍ?ӊcT.O5e\I2y.xKX(.}grF[bǏfӃp4 J0yi >"3!Y=)íOH~"s]RIa(d#eHp Jdj#"I ŝܳRIy2ZFuFfRjdc񈥢dacX{F(KX {1"22aLUUFYdR4?a;e$R)FRWr޺RbG'Db0V`@iNPޕ5q#˾lEanLz1^άDnZ=&(6. ED_H$ %`'rfc NE#qW\;%x^_  j\:V=!˼#AT "XuLHYo^otV2셶μZD؆ Too()[^ iPB/.n]Czqd}񗛻Xag{=Q?^_~|.WQy&7a+0wqH1= muz~6iu^+B_[\57pr{oeȷ͊k!ϜEx \cAb:xr=v+hv!ϜExJaڭEU+ .#f"@WVߌHn!Q-`t!U}f7o #~|i+G-86E0jmcđ#ַ/X̷ǖ+%d&XᠨV"QIV x&:m`L.ShboWz-pH[AJ:bDy`&hɑIC0ҴD(Wz3BW,%K0f.Z.wj kƧmn)y+'TPRGDB V1>XeASB `PxBXs8xWTV/#SV[|Fωk:qH3G+0oaVm?ܺ!}{,l$z{X% )c#Z䄼`MrB>Ќ2fF=Ϧ(=V|Y/)*oVy gۗb-F4=D0o_鷯!ϜEx߰GbN7h3zM\[Fs[ y,:A|’}kES֣7.Vє՜j1;)>4"8 Wڋ%mm3+q=M3c l!EN0G}@V ,s,3#Q!ƌS%8^&uSxI0uʣBez@Y j9}(Zᐮ\Hz)R}?wE/]a(:=JpN-6zNgAКֹC"iɯIWo/nVN H쏬I^dWrqvy_wyV| 56O8f7sfy*xmʜ~t93ؼ@ͨA}5p{ }Ő&Z Wt,[yo Gg /y+O*1q^L;gl1a׃"sĔ>.X5y`9cL,A$9`w|c@Nm\rލrw9QRFD=17nr/`zʵUoÕUz<09u{HveNr-Ԭ dԂ+`c>]?$YBVVqDj-GD5J:~ynJN)G#r7>DiRfG3&ތ[(xs)Uyiߣ=U<֣Qa xmGE{4QQ嵐{0'OW)=G4Q qdQogjb gc$#"Q8g hA~#}1V\VUIܼnI'HPB2 &4HGSƐёTE1A>vGF ln%Z'7c`Z2xm6tE TUZS)@iH,8YjN~A; QEj};|ՄKF{ۺSxh 1}sjA,* bU^* QNnFX|V@!ϜEE`c]d>z ʘ#xBXf)=m}rh$h>-[H%r3'W ֪'R1ځϚiծ1edgnWi;2C!\+ld*i:4sib\%4$N@E Thyn: %pg0{r5{!i[x`¼1iRX;0+oނ_)'$D] $P+71`)6X,QҸP rB-JU4uDX$Im!4D90Ndj! ċ;8'iʨI4l񩄓6J5"-:տWҬ7H8#xҋaS^;K* 'WOW;廛]`hn7O!x5 P(za6`4r+`4oah?x1k8)/1 /svlg3Ѱ k-WY?Je>]e!||Yd?x7 }e'5d<-K@#0턉2=> (Xy#fi3:Vq˕dWȳ:v3 G!8zΚC9ذX:aY33ǓvD}W-\6fHѫ"ֽ;6y6Y-v^*r2fF"[뤓''e:#0Չ:~U49+8 &\*$#$,qIqY9ȷ3PNף'˨ݏZI.N* \ w>R+cl8䙳hOɁߋ]W ֓FU;nB[)9S>Dj:?]Fs[ y,STJUL(GYšoB;l[rtR1B\AܙJ>?wր[f$. J-H,Y[ɐ}lD - L\;f<9fqev\"u4{"j |Phpfc`-Rc(%}(@4e#jw;zG޿/J q/o K5ccB#l=m%U`9Oה%u=Md*ND K$|jč N幱)Wuz]hxW21y3T缑drxz_tXĦ~pEE0z[N`VyE+X1(~?7?|7ouua[]|6eF4:Y W'ڽ= +=怂y闠woYH(b"l3FFOcs8U7[Wݼ%W]I;"6k8 X|}W_3dc~]t WS_8sݬml 6 ֬JNVH~+X~njv|cΐ3:a=A:Us0pʃ&kp`j$3#Icc''EЦY;tfƮ^E{s%LNbn!,5bsNtAulGh.R~fY_~(T^3'ʁ;fLH4UmR˨ w)25;" ^dWP[4=JzQT[ DSNj'бU>FRjΐ^nR?h y].x|sζscuLLU`o!?cn/*]8/%׭zWNϜ+Jo7\a+7\DXJ/xMK;=h列2$vF}H\:o):ݝ+i'OZLKR&Dۏa"m`RϷ7w?' HmdVt#~Y\^_ͧ/SOVEͷK6džlW,ChUKŴ8*l#$Ut|l93Tټh u6/<b9ِUСҨciBш`| 56gimym cKN(ASfϙ ը =,줨> ϙ)fmԁ PR(0Fjc; o>ggybInX,&físfykS)a&qte:XIX#e "+Ol>cզ|^Zcϙp:S:8kLOl>g߻Z $GS#_R}:tZ4GͿZLW?Rfm衹]KjzUp/<{/.Kk^˕X /WCS2,eHdFD҉,)_%P"L dC~3Gq+נ| 焱 x4XɉIC`\~L?-Nװ8^ĎkX, ?oa(ji7L6XM6 Z#۠娷r=2`#[|ȷ0ؿc/4o\wW/<ȕѸb!Tu.GNJ!r7yZ0WKZqP$DsqFū%z*ث%9WK΢M<5/<5`}m׼PN RoA䚶~F,Hǜ3&i`*$Sp gl^&?n[qP0d7.]S+fK[-~ty,S4_iJ8u@dݶ9[q@YQrfA)(Wb\ jVPpδi +3|Ar|@7k J) ).j˰?tݕ}[Wdh$N#5&H <<twM*e#Cd xy7c5IS9D1rK(INh̕KqS1) bxf,g !.Rd_/֡nc!?Ƽi/YxcHu:x\]zx,y%3*۬ZHŘ2dmΘ@⊣@#Gq K,SȵN*8K' )1Hrz߮?]0qi~Q{e<*6>_<}ܧwy7ub7ۈhq";D֝<8NLZ\$N>ߺlɩ%;{Y2z)0Z N;˭cF xN=ߺp&^[n5=F#5zoɿ駏]HWg׉dK}o[qKtЊ[q+n v9r矿]|c(ƫ&<^ddleuzQCn:a|"/ڌ-E.5v3(-Fq"U\]H,hIf|HReFQFJ1)NmԬ l ;0Zm&/3`EտwŋwV{~Xt‹D??ga2Gp'{Lɓ*gv6,{~n1B 7Om`&k8(ijFMڊܐܜv3m y,SBЬǟas(QR<$YrC=0֊"!ZU'ד`z[)Փp g<1\#|0xU)#߸-4(cPhPZ/Wh0ŅՇgY,Z\Zh?TIE/Ken9B!b-9elS'V֞P `f]Nz^]ͭ _}yuS0*O?v{o_PK-A%#EP$y'wevrkx]Lu_V;.2AxFDOݮ%;_lW{Դ7zOcÃ;uxYTxJk艁/N_C;(1ں'`B6taZC/*=;'H8JdS,q3Ѓr3UvK) C![mV0 s cJ b 9Iӹ.rBXc,JMh2 x,ٖwғEX=lE.$;%?ziŝ J Rl9)Vd$T)@K`*Lb6ց,HF̫,ckn9I{όq@bm^.m[RaynKL222]ȝ3 H&]?>0)aB,9x8Y:F g}SV)Q ݑo K ˖?AVoCOwHno]GcÕ@q},)ťQ1PT K67Uzovٍ]o9ҮWN9ɘ[Ck<, k`2Y*luxrμŬP0yJvQa6(R-7#\^R. 
tGqk10W˖9HrnGbynAuhqq˛`0l%/lZ2-h.e=CeRRE;h, ƞGՌAd,Y':8cWK Zb?{e-{wǟSY'>jP`Qw"' 썓 BFƅxY_?[sb#5M}VosE5~դ/]l/8pm)UҭFhJAlU-_z:|S/$zp|Y%;܊yZo>§\e`H jt|Wt5LtJ)#޲.L:keͣ}\ kh3ƒ艚RJ eL)D_{nU/}5g` 1FrfN=ܬ?WBn `B/{]",}Bߛ Ya0QBfO/ JU@XB'*ZLgIZbS=/gKU,Y+,s|-i7O b-Z+J }ZQSfzZvkm y,Sjdmpj}%4*ÉNn5hyNyfz%B5zpoփaC^8>)[S/&N, N,Xub EH8usNuJX< |S/%_;|GW/J(q!VaHWdUV[P:mRq]ph?&x/St7x;-c)ŠhȘԜ FI>EVrt&1ˣ@ÓTUQzc ?|e%v00X)~K97N1 齫_ Arx# Rm־crOgL>,F IЀLqeJ"p6ctQ|Ի(9$ fY٨B,4~hdj!fh9bh5fc]JlUrgŃ7-Έ;|T:IL (ГbafdC3@ k1Ă N +IF鴐V #OBf}DZ9׭(Tujd! q-ۆmTsj6Z" ;0o%uߨ)]5B]N!/E[xJr9NH7O-H9 `HQѩcVT-8(ߨQ-+^()TKRZ΢yʢd X)q#߸&(cPMPq5AQXH,IWҚeCZhqiNzr%c](~Tl[QsNy: _?Wr2s,%I2` C!e({4Hœ+Ǖw&5ʨB:.a qrՁQgďhR$A#L G'Dؒ9с00n$pLz YBNLI9޹d>dt Bm}Iy}nG*p"$tWD*+֪kg$pb| "p8,*mƝSЉ)Z؉C(>-☟lE vIg/UH<3kˬ[dEU@-(ՋcL@h_*tG/G+#i'J,`Mz*@.O9rѩR=X+i|Y?)Kvx0+'0)U{!yfmVg/a൲¬H-Aќh /V쫐c^. 7'6PR?ٚ  qD'\#߸PKxe[r۠Xw`scوt8}{|wàkUP߹Fs4jistOӎ!X?~~^4$xznj%V q)1&|9 ozA#T9PD] ř^(HpYGMx(ќ-(1'4B!P3|q.hܳG;4ќ-X>/)swMEXo)˳F ՜-[2ZY؄DY$7RF)e:s*r oi4뷁vFzMA82C%cMdJܰY]7Az╂~z)SguM};fa $󺦬\DBss"B8iK"r}}25Wd @32 &̄YWKtY%pó$Fz7k 8r|^3Mlr8yב#J鈹i6z쬉uFDI*Bwu) >xcmXPkɽ7IRYP}Aj0}Y$z{6 5#6{ښ~f&Gו'@ξP" \I>/TJ9cWKRF9}O3Ntf΀߳Ť*UERmܨ1V!vM!RfF R fV}qjS 6%A4Bh(uƁ4OI[^/4ZX׋W %Oۻ86D׍.ø\gS.jz<"bpvM %v&ϯMihaG: _6Zqm!P.JoyyR>k ߧKLxۡfD#=J% ڜ~60A ;D$݁'juq|zԥ0}ocO ;lτ1 ۻd0a⇃rYf(jU͛-]GB"'~q7%(>ꈱ!dڥ\2b=g1nguyE+'Z2N?b='scbNV=__ãN|mt^CG̳aʧe : <4־ |頶z>2Cr/S#AwNw8^{`n'cLf2f hi{#E"cRe0c7[NLq\b4xxl%FVϼ9̸4?U=6v,r [ IB6R0|,;\ℰj9̳C7ZսC#OXF?ՠ!x~WPI K ߐN9 OكzS0XThp|udKIJDƗ,#$ +!_.86ITQ6:QT,na.T!\~-b@lL&,J#Ur!,zVJP@5C"V@/!΁1:q#C"~BȨJ-btT5 T;fz%FQ]w ! qɨHx&(cU -ap`pC#Y9=QEb`F (1$eL\DՆ@NIRTd%_T`,VAáL…A ;K( 2ը+fBf(]NMAC +6좣e 5լߝu\U` 5*-"l}HKbDfB$V B&i r@Qf>!CmO-)Bz%J"* "Փ0Yr Cr"^$2B'Rf!cȣH7FB :C4*0#~#Fh !$1t IZ1r SUYiKBdN5:E$:"T'$:8( rtIEԛ TKz> I@GT*c32LH9@tLkE9 'K{z8NEdHᢅ!0DM\8\GJCֱD'狶<75tb"^ZP|U\3F% ڠfmB=y.dap E"N/Ju"y#rU.T3,8/ T=}44+G Ǝ%V:$yZټ~ÆD#-tO4? C BY:~I8kxw^uɈIZOko{Cد;M\mS%7H9BxD5s=u Pp;Tϝ2_ϝOzvɃgaE3//`|[<v1 pkznRG-Wɫ{a8zU!_>ԣ /V;gr\}s*\]֓A띰t<}z d$]+nUviuͫ[lnmٗ5LA=A۩lG6~f%[|Cwj>d^X6 .9i^"t.W6f-3( ѨD)VG;ThMJNPlx[](K3QmݥѲ\v?eg~da/Af{ ?{Wƍʟ.UhtnJK.rR`0ֲKj$-)R3áT%hjt0j)U̒*oW|rrhr]˲ηKD> ^YoC\T6l@=;#ٖ ]!35C*(>LK@>布XቈDA* DM-d0]_Ixm4Jڛt-MSs#LC h>\^~\RenvhWsx{JpEAMJQ0PumdmaI~UNkg^᧋SJ Ry*\8p,oJq jFBWͩƆ42v*^S[W|ݯmYVrʼnwrqn|w*pxb_`XHoD5Aci &NODlI&KyM*oJ?y$i_y^ǻk_(W>*..nkc}>V[Qe1H$7[OELD(fI3ed3z$y ׂ')z$6we0(Lt`R DG]٦(օ׶.RM_ն!TKyfA)(qD8nMϪi=9(@l0_,%,E34Z 6$@xYV4`Z4 xA> 6 S?ȿxZ@e~; [t6"h b"jΧJ*'P-p).Qi=U% 6 ځUJN7^F+++W9G*Vkn":ޜ{L(cJru+O&[-:N,~qv/O寸$QZNݫ Js/b{FO#6$ǤWb:X EpS9.|+hץntc~_ O $^ju)eƻ@i4{WdؙX/ui%0u&k"%ẉOKeEInfSkՁoƷaNLG@CP J#NH$< 3#Tئ-Z1Uԭwost7<4P ֢ JH$kۭ{*ȼ7S4%WQ#%BtNA."5ACϑ;5asX0]SdSԮ&JQ:n6&NB l`@H#"$Ɋ!km6ՋQD 4H71:8_ߞ5!)2_N9j@K_뢬0b} D&j߮rUhH#WJ˻ȗ\Zm-H$E,|-ik/OxB E --*_Ǥ}b_[jڿ?J$3R.Fry# 0vjG]XKsƽm~XrnNc߀),\Ʋ(Q8 oכ-}Er=,=ȺE7{X]HX?bVx0R6! Kӵk5$ߗ9Y_siTdr{ZZc L[֐=qQArb>$H48Y&yta ;ϗ8M~8)OIxε3W4݊W4kM>+.x:f!bk'w)1|HFvQ=  x#/&JV߽?f+89) i\&-n z|yfn H|w+% ״/7;bWN|~b_}x¿տw᛻kK6koUs=71ޣүFmFOc-[uѵ>g'8ָ/W/8)Ŵz]7@R]Ԫ}#A! ^cȢȜ"m"!`}"}at[>`!ɫ#Nm53vX{eFmXևw$oFM^,x59i/?,Ov$tW.!V5x_Mn\]ޝ!]}jecn~zn`A;#^_G!tNiY#;PYl¢ͽq{zU2><ȏֳ0$-U.{#7,2f+8A侣wcttj`4#7 q>nL»ebd::mݲYS[r&fS gJ XN'7s?9ܗzN.>ύ_DVXv}ϑV)/& Am ꩕R·6Xf!&8ƚVTEi OPk#Ѻ2DmTe͏e2-eYiZwi[:eއhCc%R! 
H7+XFXB'i}p-8즖y+g%zE3vhzxsƱ9a}++s4Mr|Ѻ ` `#Pw$J݇vuV+J2z" ۘ+~rYFPߟ|< ?ZO[jwvt]|(K+4ĺ  Ks>W|q (>(Nt3A{jnkM߉gOg PK.1tHmR AJ~1*!lC၌T6B`jdjdd̙ ʎ v߮Da?wm$MD|>|aRAii L3@ٛ3f _vvN*!-~.`?k 3V')V0jrU ]eB<2JIyi;YW#UM)VI*.R IQ[3AYE52}:F\QbB5VS=W\h쭎2ۓك?"lv Srt҉b^>[yZiU(݋Vr`6O jz)Bk&9KO7[Mᅰvou?OwӥK'4Ev&d&donʛKnX<[(zA5j)kP[.Mc274\'Yӳ̠6{ݞD1q8l`ԁmI| 8y&|=z #v&,tP*lqICu]`^V%seAUɖU r߾hvO,EY=H FWTep,4AvE9#$(ル.lmJba1Z[1j2eA/":C6",T_ ߮Q) 1H;_tuy5{b>,M4˦&F?nd'U5}GǸK[>TwBDl #·͢w trŻsVѼ[>hwBDw))d}n~;-}4izA5U1 %Cʊ&迬&|1Z7iW>r  z\?D )`z[6!€/hW"` YQ V;J9Ѝ6V@`ρX`7Q~v@,7r}tOݹ>w 봗3I!F7Ai\ PŌ_TϹ&5rK+l@9J*< 3,,5 L0 WFFv,Ab(%Cx.ah:!$;-0VJƫHI,ĭbRYqq v+]dbOyf!aѳr$}??.5_v! v}o:bh3ly٪5!= WHrǀ/hQɴc[vFjW%GǑV߽?fRVʼIBiJLZj3 ^ *@yYѪ;iK#^8F_#ڇL.^|{t<ʺU"v{LVɵLwtIV,:R7։*}XȑhM{b [.).m[S 0wOnIn}XȑhM 1C$r1H1wtn;:b4_#7mJ1P&x;@eGwcy[~r<(d`b#̾:W3 DxuxL)ޓqd+_ASUNm$;Hp3:SBJAjRR"iDr:KYvNTRgjWrF6DaދǗ>[q |ȹ=-׍()irR98zh…DAqDW^ؤ|})t%wDA˅C QykܵUI0vz| & ~/x|uUEWvc` D?9Ԃ2(dZT }I?N ՇAyOx^lD/遢!qw6eFXmA Y)Қ^7iP&ÒS$N!Ӓd0X'BʞEdJjM \ -Đ1i)ZjɁ;B\%BR,\9C0Bt]L}{@M/[Yw9J3&A!b,Xhp7ޓuDߔӗ;`@:2ehxr=6`]3{'2sdBWe@M+Y[Ee!=jk Kd١4gd9(1FߔR!.dYXx'הk Qx"W&c W:HIYHȉLtB{@ 5%L6\X,R <"(]f{2$"Afz@-l^jvb&k,~7A eIRl#-UDʳ"x`BѸfn7*g4`ѱ3rPmQQH1zO$B3Mώ<)jCyM+5,:âW#2YF#Ld[♕:!gI2ך?쾚QiM e}HL&ύ$IК&=tr 6=$) Β`\h g.$Rj m23t jzںEqQY5\ ik2fPkCҾ֑vΖO',c5F5F&&d։ڌT)r䐐~#yf!=)ao0X:DBX9A+:C":ArF.4Z jה2$EeyR\# 09(ʦfIb'Pۑҏa?RL7 W>l+!I2>Wl I#{+ѡaV% H2h4h23]<$ Bd8hK|yEBK̀'7*h9+Ӕ=+^/;}46ZpYF&v!]᰷ͲܕN_>J_<}ԳEP ڇ p}{~>h~Yč?7= }R8d] 261"$5>pN_n&.t eli_)ݜç˶ΐxʖn]w/twQX’N,Ylˉ@jkbq}/N(5> KX9t%+·߈6=\)[G)[aQ3eWʐKum%C*/%qmӋ-pvA|5ڥo1#SnfWAc$KUaovctlV=|s cwOR0r]i 1%oc_?w?ʗ~Kp;ژ\]v08k @@j?0!;1ew;iU&!hï=/3?][h *N@ ҶY&kǝ\oofmR6մ̣[lȮMc 3+-Ѿش :j޶ڪ""{i:738\&i.|pIXܑ#u 7đ [3?䙞rmۡ=򛟯;`U-^"i NKz<,ҕ:GI;oENQ}<>Fifz&FbDh4=& EҔJd}i+iDmIx3 Y鞖*?wZZg}s}w~ϋ˒tyM0P M״\֟ݻ|1mw-oڿ'ov]{$O2؈1#p7 eww~d\Pcf1I:NZa" Oj;A,"{]JF\r,;ĚENkf~pPi//| XZ LTZ4lY v#ϣf hwmհxxGƛ {6\`UC)-nUׂŗXhWq 9HQfwA+5oYB-纕R[ߪNwKyzu?@NN[mZgױoomIQ溛G=9 iQ)Ƕ҈{Cl;jxKcڙO; {Kgހi6|BzFC %QopsI612R9WBӻN^9++PIFt|8=ECIy"Eϻ=@\\5B,NDG? 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 09:01:02.47661368 +0000 UTC m=+0.782489884,LastTimestamp:2026-01-31 09:01:02.47661368 +0000 UTC m=+0.782489884,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.503819 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.503888 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.503909 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.503926 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.503942 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.503955 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.503969 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.503985 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504000 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504012 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504059 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504072 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504086 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504105 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504121 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504134 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504147 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504160 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504173 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504187 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504225 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504241 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504263 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504276 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504289 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504302 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504315 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504331 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504343 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504354 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504372 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504419 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504433 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504443 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504455 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504467 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504478 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504488 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504499 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504510 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504521 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504532 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504543 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504554 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504565 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504576 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504589 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504600 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504610 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504621 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504632 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504744 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504763 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504773 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504784 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504793 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504803 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504812 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504824 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504865 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504875 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783"
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504885 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504895 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504907 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504920 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504930 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504940 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504952 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504962 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.504977 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505002 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505017 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505030 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505051 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505063 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505074 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505084 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505095 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505105 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505117 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505128 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505139 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505150 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505161 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505176 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505191 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505203 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505217 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505230 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505243 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505254 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505266 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505278 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505292 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505307 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505321 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505343 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505357 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505369 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505385 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505397 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505409 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505421 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505435 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505455 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505470 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505483 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505494 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505505 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505516 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505526 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505538 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505550 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505562 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505572 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505583 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505594 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505604 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505615 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505624 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505633 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505644 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505653 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505697 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505714 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505725 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505734 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505745 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505758 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505769 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505779 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505792 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505815 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505831 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.505845 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.510357 4732 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511341 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 
09:01:02.511368 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511389 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511403 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511416 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511428 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511446 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511461 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511476 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511490 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511501 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511513 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511690 4732 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.511709 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512025 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512049 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512063 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512076 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512089 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512101 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512114 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512127 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512140 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512157 4732 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512171 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512185 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512201 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512216 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512230 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512272 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512290 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512303 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512316 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512329 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512341 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512354 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.512365 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513275 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513310 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513326 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513342 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513355 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513370 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513383 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513395 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513407 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513419 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513432 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513447 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513460 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513474 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513487 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513502 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513515 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513528 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513544 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513557 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513569 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513585 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513598 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513612 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513626 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513641 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513654 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513691 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513705 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513717 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513730 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513743 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513754 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513769 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513782 4732 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513794 4732 reconstruct.go:97] "Volume reconstruction finished" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.513804 4732 reconciler.go:26] "Reconciler: start to sync state" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.514132 4732 manager.go:324] Recovery completed Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.524992 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.526770 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.526958 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.527041 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.527808 4732 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.527828 4732 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.527848 4732 state_mem.go:36] "Initialized new in-memory state store" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.539036 4732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.541110 4732 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.541282 4732 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.541362 4732 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 09:01:02 crc kubenswrapper[4732]: E0131 09:01:02.541554 4732 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 09:01:02 crc kubenswrapper[4732]: W0131 09:01:02.545922 4732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:02 crc kubenswrapper[4732]: E0131 09:01:02.546009 4732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.572611 4732 policy_none.go:49] "None policy: Start" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.574684 4732 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.574747 4732 state_mem.go:35] "Initializing new in-memory state store" Jan 31 09:01:02 crc kubenswrapper[4732]: E0131 09:01:02.580817 4732 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.633547 4732 manager.go:334] "Starting Device Plugin manager" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.633740 4732 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.633760 4732 server.go:79] "Starting device plugin registration server" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.634233 4732 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.634254 4732 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.634476 4732 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.634550 4732 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.634557 4732 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 09:01:02 crc kubenswrapper[4732]: E0131 09:01:02.642268 4732 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.643466 4732 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:01:02 crc kubenswrapper[4732]: 
I0131 09:01:02.643743 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.645891 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.645933 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.645949 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.646174 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.646409 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.646484 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.647454 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.647486 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.647500 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.647588 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.647801 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.647874 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.648118 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.648162 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.648179 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.648586 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.648621 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.648633 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.648816 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.648940 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.648983 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650093 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650128 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650141 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650217 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650230 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650240 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650280 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650489 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650564 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650941 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650961 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.650972 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.651165 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.651190 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.651217 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.651392 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.651428 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.652406 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.652428 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.652437 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.652451 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.652504 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.652515 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: E0131 09:01:02.684354 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="400ms" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.716639 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.716711 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.716740 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.716785 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.716806 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 
09:01:02.716847 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.716926 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.716969 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.716993 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.717025 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.717053 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.717083 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.717105 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.717124 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.717141 4732 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.734505 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.735834 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.735883 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.735900 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.735931 4732 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 09:01:02 crc kubenswrapper[4732]: E0131 09:01:02.736461 4732 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.231:6443: connect: connection refused" node="crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818027 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818091 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818108 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818123 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818155 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818171 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818187 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818202 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818218 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818231 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818224 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818270 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818294 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818299 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818323 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818246 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818281 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818245 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818337 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818366 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818331 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818347 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818507 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818557 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818582 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818614 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818700 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818709 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818713 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.818743 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.937597 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.939824 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.939870 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.939882 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.939909 4732 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 09:01:02 crc kubenswrapper[4732]: E0131 09:01:02.940389 4732 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.231:6443: connect: connection refused" node="crc" Jan 31 09:01:02 crc kubenswrapper[4732]: I0131 09:01:02.995554 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.016635 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.037148 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.044349 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.048033 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:03 crc kubenswrapper[4732]: E0131 09:01:03.085747 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="800ms" Jan 31 09:01:03 crc kubenswrapper[4732]: W0131 09:01:03.135444 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-69f12eef6fb6e235cac8ad04bca66ee300952ad6934c65f7ca40d82cd0de8532 WatchSource:0}: Error finding container 69f12eef6fb6e235cac8ad04bca66ee300952ad6934c65f7ca40d82cd0de8532: Status 404 returned error can't find the container with id 69f12eef6fb6e235cac8ad04bca66ee300952ad6934c65f7ca40d82cd0de8532 Jan 31 09:01:03 crc kubenswrapper[4732]: W0131 09:01:03.137350 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a5f6ba4146ca4fd45a86b175655c87df100727d86a0bc7fd3f99a3c92a90f415 WatchSource:0}: Error finding container a5f6ba4146ca4fd45a86b175655c87df100727d86a0bc7fd3f99a3c92a90f415: Status 404 returned error can't find the container with id a5f6ba4146ca4fd45a86b175655c87df100727d86a0bc7fd3f99a3c92a90f415 Jan 31 09:01:03 crc kubenswrapper[4732]: W0131 09:01:03.138007 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-fd98298222cb06cca26e7e01f6e6a136f1f474083d126c37e5a1766668a8b049 WatchSource:0}: Error finding container fd98298222cb06cca26e7e01f6e6a136f1f474083d126c37e5a1766668a8b049: Status 404 returned error can't find the container with id fd98298222cb06cca26e7e01f6e6a136f1f474083d126c37e5a1766668a8b049 Jan 31 09:01:03 crc kubenswrapper[4732]: W0131 09:01:03.138701 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0df22fa47819f792794c8f59b16fedf553f1b0f61ca7d9b7a7e7183322bc1df8 WatchSource:0}: Error finding container 0df22fa47819f792794c8f59b16fedf553f1b0f61ca7d9b7a7e7183322bc1df8: Status 404 returned error can't find the container with id 0df22fa47819f792794c8f59b16fedf553f1b0f61ca7d9b7a7e7183322bc1df8 Jan 31 09:01:03 crc kubenswrapper[4732]: W0131 09:01:03.140193 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a679f267b1a91b098a7388467a2bed47ca2a5417419f81d68cb7d18ed0f7312a WatchSource:0}: Error finding container a679f267b1a91b098a7388467a2bed47ca2a5417419f81d68cb7d18ed0f7312a: Status 404 returned error can't find the container with id a679f267b1a91b098a7388467a2bed47ca2a5417419f81d68cb7d18ed0f7312a Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.340536 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.342031 4732 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.342073 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.342085 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.342111 4732 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 09:01:03 crc kubenswrapper[4732]: E0131 09:01:03.342543 4732 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.231:6443: connect: connection refused" node="crc" Jan 31 09:01:03 crc kubenswrapper[4732]: W0131 09:01:03.380348 4732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:03 crc kubenswrapper[4732]: E0131 09:01:03.380439 4732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:03 crc kubenswrapper[4732]: W0131 09:01:03.430713 4732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:03 crc kubenswrapper[4732]: E0131 09:01:03.430800 4732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.477977 4732 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.480058 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:02:44.562395335 +0000 UTC Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.547273 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0df22fa47819f792794c8f59b16fedf553f1b0f61ca7d9b7a7e7183322bc1df8"} Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.548328 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a5f6ba4146ca4fd45a86b175655c87df100727d86a0bc7fd3f99a3c92a90f415"} Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.549327 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a679f267b1a91b098a7388467a2bed47ca2a5417419f81d68cb7d18ed0f7312a"} Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.550326 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"69f12eef6fb6e235cac8ad04bca66ee300952ad6934c65f7ca40d82cd0de8532"} Jan 31 09:01:03 crc kubenswrapper[4732]: I0131 09:01:03.551944 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd98298222cb06cca26e7e01f6e6a136f1f474083d126c37e5a1766668a8b049"} Jan 31 09:01:03 crc kubenswrapper[4732]: W0131 09:01:03.694442 4732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:03 crc kubenswrapper[4732]: E0131 09:01:03.694536 4732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:03 crc kubenswrapper[4732]: W0131 09:01:03.791554 4732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:03 crc kubenswrapper[4732]: E0131 09:01:03.791658 4732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:03 crc kubenswrapper[4732]: E0131 09:01:03.887099 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="1.6s" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.143309 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.144567 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.144607 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.144620 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.144653 4732 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 09:01:04 crc kubenswrapper[4732]: E0131 
09:01:04.145218 4732 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.231:6443: connect: connection refused" node="crc" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.356723 4732 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 09:01:04 crc kubenswrapper[4732]: E0131 09:01:04.357869 4732 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.478264 4732 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.481227 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:47:17.581287173 +0000 UTC Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.556632 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675"} Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.556696 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.556950 4732 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675" exitCode=0 Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.557905 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.557963 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.557984 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.558780 4732 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3" exitCode=0 Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.558885 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3"} Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.558974 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.560316 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.560340 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.560350 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.562870 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da"} Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.564783 4732 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2" exitCode=0 Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.564832 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.564840 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2"} Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.565544 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.565576 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.565588 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.567562 4732 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f" exitCode=0 Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.567615 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f"} Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.568401 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.570791 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.570830 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.570840 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.574062 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.575078 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:04 crc 
kubenswrapper[4732]: I0131 09:01:04.575129 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:04 crc kubenswrapper[4732]: I0131 09:01:04.575142 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:05 crc kubenswrapper[4732]: W0131 09:01:05.076342 4732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:05 crc kubenswrapper[4732]: E0131 09:01:05.076473 4732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.478046 4732 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.481512 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:05:17.307762429 +0000 UTC Jan 31 09:01:05 crc kubenswrapper[4732]: E0131 09:01:05.488417 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="3.2s" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.572599 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7"} Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.572686 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd"} Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.573978 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a"} Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.574001 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1"} Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.575127 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"87eb9563b2ca7420cbb70ccd32aa77f2a82c769d57eceacd99a4d94cb1c3a0d7"} Jan 31 
09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.575241 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.575887 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.575910 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.575918 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.577806 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca"} Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.577858 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d"} Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.579718 4732 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8" exitCode=0 Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.579778 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8"} Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.579863 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.580744 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.580772 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.580783 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.745531 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.746863 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.746894 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.746904 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:05 crc kubenswrapper[4732]: I0131 09:01:05.746930 4732 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 09:01:05 crc kubenswrapper[4732]: E0131 09:01:05.747375 4732 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.231:6443: connect: connection refused" node="crc" Jan 31 09:01:05 crc kubenswrapper[4732]: W0131 09:01:05.881701 4732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:05 crc kubenswrapper[4732]: E0131 09:01:05.881798 4732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:06 crc kubenswrapper[4732]: W0131 09:01:06.024372 4732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:06 crc kubenswrapper[4732]: E0131 09:01:06.024451 4732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:06 crc kubenswrapper[4732]: W0131 09:01:06.470745 4732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:06 crc kubenswrapper[4732]: E0131 09:01:06.470852 4732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.231:6443: connect: connection refused" logger="UnhandledError" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.481129 4732 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.482033 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:52:00.740642609 +0000 UTC Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.588732 4732 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf" exitCode=0 Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.588874 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.588873 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf"} Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.589949 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.589992 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.590009 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.593346 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"849206839f7a8bd7f8a6a6fa78eda460866d8039008158f82dcdc4743bd4eed2"} Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.593378 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9"} Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.593396 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722"} Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.593457 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.594444 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.594483 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.594493 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.596380 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.596371 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e"} Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.597027 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.597062 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.597073 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.599067 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.599162 4732 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.599064 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a"} Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.600163 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.600212 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.600226 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.600176 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.600274 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:06 crc kubenswrapper[4732]: I0131 09:01:06.600305 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.242032 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.478327 4732 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.231:6443: connect: connection refused Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.483118 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:07:59.577375861 +0000 UTC Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.607064 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c"} Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.607166 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3"} Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.607182 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2"} Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.609095 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.610915 4732 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
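[Editor's sketch] The reflector.go warnings at 09:01:05-09:01:06 above are kubelet's client-go informers repeatedly trying to LIST resources while the API server is still unreachable; each failed list is retried with growing backoff until a list succeeds and a WATCH can begin. A minimal fence-free sketch of that list-with-backoff loop in plain Go (not kubelet's actual code; the URL, attempt cap, and backoff constants are illustrative assumptions):

// listwithbackoff.go: retry a LIST-style GET until it succeeds, the pattern
// behind the repeated "failed to list *v1.X ... connection refused" lines.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func listWithBackoff(url string, maxAttempts int) error {
	backoff := time.Second
	client := &http.Client{Timeout: 10 * time.Second}
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // list succeeded; a real reflector would now start a WATCH
		}
		fmt.Printf("attempt %d: failed to list: %v (retrying in %s)\n", attempt, err, backoff)
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2 // grow with a cap, as client-go's backoff managers do
		}
	}
	return fmt.Errorf("gave up after %d attempts", maxAttempts)
}

func main() {
	// Placeholder URL; in the log the real target is
	// https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses
	_ = listWithBackoff("https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses", 3)
}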
containerID="849206839f7a8bd7f8a6a6fa78eda460866d8039008158f82dcdc4743bd4eed2" exitCode=255 Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.611052 4732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.611052 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"849206839f7a8bd7f8a6a6fa78eda460866d8039008158f82dcdc4743bd4eed2"} Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.611088 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.611135 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.611202 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.612342 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.612370 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.612381 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.612750 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.612781 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.612793 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.612836 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.612866 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.612881 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:07 crc kubenswrapper[4732]: I0131 09:01:07.613444 4732 scope.go:117] "RemoveContainer" containerID="849206839f7a8bd7f8a6a6fa78eda460866d8039008158f82dcdc4743bd4eed2" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.472754 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.483749 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:17:32.324285501 +0000 UTC Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.525917 4732 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.616840 4732 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.620272 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e"} Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.620380 4732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.620453 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.621723 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.621770 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.621787 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.625940 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1"} Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.625994 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134"} Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.626041 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.626061 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.627217 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.627273 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.627233 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.627313 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.627328 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.627290 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.947849 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.950059 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.950119 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.950134 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:08 crc kubenswrapper[4732]: I0131 09:01:08.950166 4732 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.015612 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.386571 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.484203 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:12:59.48013823 +0000 UTC Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.628274 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.628384 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.628411 4732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.628493 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.629783 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.629820 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.629832 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.629839 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.629891 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.629914 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.629982 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.630041 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:09 crc kubenswrapper[4732]: I0131 09:01:09.630061 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:10 crc kubenswrapper[4732]: I0131 09:01:10.242493 4732 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe 
Jan 31 09:01:10 crc kubenswrapper[4732]: I0131 09:01:10.242493 4732 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 09:01:10 crc kubenswrapper[4732]: I0131 09:01:10.242583 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 09:01:10 crc kubenswrapper[4732]: I0131 09:01:10.484830 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:13:20.56940044 +0000 UTC
Jan 31 09:01:10 crc kubenswrapper[4732]: I0131 09:01:10.630973 4732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 31 09:01:10 crc kubenswrapper[4732]: I0131 09:01:10.631053 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:10 crc kubenswrapper[4732]: I0131 09:01:10.632090 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:10 crc kubenswrapper[4732]: I0131 09:01:10.632129 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:10 crc kubenswrapper[4732]: I0131 09:01:10.632141 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.485599 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:30:21.108816923 +0000 UTC
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.713881 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.714125 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.715635 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.715697 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.715709 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.916629 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.916891 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.918264 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.918354 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:11 crc kubenswrapper[4732]: I0131 09:01:11.918381 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.151427 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.151703 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.153206 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.153248 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.153264 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.486521 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:18:37.037660578 +0000 UTC
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.631767 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.632027 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.633449 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.633525 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:12 crc kubenswrapper[4732]: I0131 09:01:12.633543 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:12 crc kubenswrapper[4732]: E0131 09:01:12.642438 4732 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.381195 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.381449 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.383068 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.383138 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.383152 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.386917 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.487245 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:52:15.00677578 +0000 UTC
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.638980 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.640214 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.640276 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.640295 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.780032 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.780424 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.781934 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.782004 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:13 crc kubenswrapper[4732]: I0131 09:01:13.782027 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:14 crc kubenswrapper[4732]: I0131 09:01:14.488248 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 19:30:41.741783079 +0000 UTC
Jan 31 09:01:15 crc kubenswrapper[4732]: I0131 09:01:15.489149 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:50:43.566853662 +0000 UTC
Jan 31 09:01:16 crc kubenswrapper[4732]: I0131 09:01:16.489736 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:50:17.969224935 +0000 UTC
Jan 31 09:01:17 crc kubenswrapper[4732]: I0131 09:01:17.361560 4732 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 31 09:01:17 crc kubenswrapper[4732]: I0131 09:01:17.361653 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 09:01:17 crc kubenswrapper[4732]: I0131 09:01:17.490407 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:47:42.61310711 +0000 UTC
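[Editor's sketch] The probe failures above all reduce to kubelet's HTTP prober rule: issue a GET with the probe's timeout, count transport errors ("connection refused", "context deadline exceeded") as failures, and count only 2xx/3xx statuses as success, which is why the 403 and 500 responses that follow also fail the startup probe. A self-contained Go sketch of that rule (URL and timeout are illustrative; kubelet's HTTPS probes likewise skip certificate verification):

// probeonce.go: one HTTP probe attempt with kubelet-style success criteria.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string, timeout time.Duration) (bool, string) {
	client := &http.Client{
		Timeout: timeout,
		// HTTPS probes do not verify the serving certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		// Yields messages like the timeout and connection-refused failures above.
		return false, err.Error()
	}
	defer resp.Body.Close()
	// Success is any status in [200, 400); 403 and 500 are failures.
	ok := resp.StatusCode >= 200 && resp.StatusCode < 400
	return ok, fmt.Sprintf("HTTP %d", resp.StatusCode)
}

func main() {
	ok, detail := probeOnce("https://192.168.126.11:17697/healthz", 1*time.Second)
	fmt.Printf("probe ok=%v detail=%q\n", ok, detail)
}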
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 09:01:18 crc kubenswrapper[4732]: I0131 09:01:18.365477 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 09:01:18 crc kubenswrapper[4732]: I0131 09:01:18.369186 4732 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 31 09:01:18 crc kubenswrapper[4732]: I0131 09:01:18.369251 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 09:01:18 crc kubenswrapper[4732]: I0131 09:01:18.476824 4732 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]log ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]etcd ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/priority-and-fairness-filter ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/start-apiextensions-informers ok Jan 31 09:01:18 crc kubenswrapper[4732]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Jan 31 09:01:18 crc kubenswrapper[4732]: [-]poststarthook/crd-informer-synced failed: reason withheld Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/start-system-namespaces-controller ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 31 09:01:18 crc kubenswrapper[4732]: 
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 31 09:01:18 crc kubenswrapper[4732]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 31 09:01:18 crc kubenswrapper[4732]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/bootstrap-controller ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/start-kube-aggregator-informers ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 31 09:01:18 crc kubenswrapper[4732]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 31 09:01:18 crc kubenswrapper[4732]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]autoregister-completion ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/apiservice-openapi-controller ok Jan 31 09:01:18 crc kubenswrapper[4732]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 31 09:01:18 crc kubenswrapper[4732]: livez check failed Jan 31 09:01:18 crc kubenswrapper[4732]: I0131 09:01:18.476883 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:01:18 crc kubenswrapper[4732]: I0131 09:01:18.491174 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:39:50.075828495 +0000 UTC Jan 31 09:01:19 crc kubenswrapper[4732]: I0131 09:01:19.491275 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:46:27.266453574 +0000 UTC Jan 31 09:01:20 crc kubenswrapper[4732]: I0131 09:01:20.243030 4732 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 09:01:20 crc kubenswrapper[4732]: I0131 09:01:20.243139 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 09:01:20 crc kubenswrapper[4732]: I0131 09:01:20.492020 4732 
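[Editor's sketch] The 500 response above comes with the API server's verbose /livez report, one check per line: "[+]" passed, "[-]" failed with "reason withheld". A small Go parser for that format, fed a trimmed copy of the body logged above (sketch only; in practice the body arrives on the probe's HTTP response):

// livezparse.go: extract failing check names from a verbose healthz body.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func failingChecks(body string) []string {
	var failed []string
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "[-]") {
			// Keep only the check name, dropping the "failed: reason withheld" tail.
			name := strings.TrimPrefix(line, "[-]")
			if i := strings.Index(name, " failed"); i >= 0 {
				name = name[:i]
			}
			failed = append(failed, name)
		}
	}
	return failed
}

func main() {
	body := `[+]ping ok
[+]etcd ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
livez check failed`
	fmt.Println("failing:", failingChecks(body))
}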
Jan 31 09:01:20 crc kubenswrapper[4732]: I0131 09:01:20.492020 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:24:08.715428717 +0000 UTC
Jan 31 09:01:21 crc kubenswrapper[4732]: I0131 09:01:21.492791 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:08:04.135043559 +0000 UTC
Jan 31 09:01:22 crc kubenswrapper[4732]: I0131 09:01:22.158021 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:01:22 crc kubenswrapper[4732]: I0131 09:01:22.158174 4732 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 09:01:22 crc kubenswrapper[4732]: I0131 09:01:22.159469 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:22 crc kubenswrapper[4732]: I0131 09:01:22.159524 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:22 crc kubenswrapper[4732]: I0131 09:01:22.159534 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:22 crc kubenswrapper[4732]: I0131 09:01:22.493877 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:40:40.655878001 +0000 UTC
Jan 31 09:01:22 crc kubenswrapper[4732]: E0131 09:01:22.642702 4732 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.362968 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.364864 4732 trace.go:236] Trace[766566254]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 09:01:11.731) (total time: 11632ms):
Jan 31 09:01:23 crc kubenswrapper[4732]: Trace[766566254]: ---"Objects listed" error: 11632ms (09:01:23.364)
Jan 31 09:01:23 crc kubenswrapper[4732]: Trace[766566254]: [11.632994324s] [11.632994324s] END
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.364892 4732 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.366219 4732 trace.go:236] Trace[445597049]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 09:01:10.233) (total time: 13132ms):
Jan 31 09:01:23 crc kubenswrapper[4732]: Trace[445597049]: ---"Objects listed" error: 13132ms (09:01:23.366)
Jan 31 09:01:23 crc kubenswrapper[4732]: Trace[445597049]: [13.132602516s] [13.132602516s] END
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.366246 4732 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.367309 4732 trace.go:236] Trace[165649746]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 09:01:08.647) (total time: 14720ms):
Jan 31 09:01:23 crc kubenswrapper[4732]: Trace[165649746]: ---"Objects listed" error: 14720ms (09:01:23.367)
Jan 31 09:01:23 crc kubenswrapper[4732]: Trace[165649746]: [14.720087412s] [14.720087412s] END
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.367374 4732 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.368295 4732 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.368477 4732 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.369284 4732 trace.go:236] Trace[551810421]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 09:01:11.298) (total time: 12070ms):
Jan 31 09:01:23 crc kubenswrapper[4732]: Trace[551810421]: ---"Objects listed" error: 12070ms (09:01:23.369)
Jan 31 09:01:23 crc kubenswrapper[4732]: Trace[551810421]: [12.070646424s] [12.070646424s] END
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.369311 4732 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.373081 4732 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.436248 4732 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33680->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.436341 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33680->192.168.126.11:17697: read: connection reset by peer"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.471741 4732 apiserver.go:52] "Watching apiserver"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.475844 4732 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.476243 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.476744 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
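[Editor's sketch] The Trace[...] blocks above are client-go's slow-request traces: each records when a ListAndWatch started and the total time it blocked (11.6 to 14.7 seconds here, essentially the full outage window until the API server answered). A throwaway Go parser for the bracketed total duration (the regex and sample line are my own, matching the trace format above):

// tracetotal.go: pull the "[11.632994324s]" total out of a trace line.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var totalRe = regexp.MustCompile(`\[(\d+(?:\.\d+)?(?:ms|s))\]`)

func traceTotal(line string) (time.Duration, error) {
	m := totalRe.FindStringSubmatch(line)
	if m == nil {
		return 0, fmt.Errorf("no duration in %q", line)
	}
	return time.ParseDuration(m[1])
}

func main() {
	line := `Trace[766566254]: [11.632994324s] [11.632994324s] END`
	d, err := traceTotal(line)
	if err != nil {
		panic(err)
	}
	fmt.Println("ListAndWatch total:", d) // 11.632994324s
}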
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.476750 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.476850 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.477115 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.477170 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.477221 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.477238 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.477294 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.477534 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.478676 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.478954 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.479067 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.479127 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.479282 4732 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.479317 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.480925 4732 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.481843 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.482124 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.482474 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.482745 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.482975 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.483146 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.483431 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.494028 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:56:25.651591047 +0000 UTC
Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.504113 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.514095 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.520912 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.526798 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.536618 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.546808 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.558410 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.568711 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570225 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570282 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570308 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570332 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570356 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570388 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570418 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570442 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570467 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.570510 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.07046852 +0000 UTC m=+22.376344734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570575 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570617 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570644 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570693 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570717 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570739 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570766 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570792 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570818 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570841 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570870 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570896 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570891 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570923 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571000 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571030 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571055 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571078 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571107 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571130 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571153 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571175 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571201 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571225 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571248 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571283 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571308 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571335 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571358 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571380 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571405 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571429 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571453 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571477 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571542 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571566 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571591 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571647 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571686 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571707 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571726 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571746 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571802 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") 
pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571825 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571847 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571876 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571901 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570900 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571978 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572004 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572023 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.570920 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571159 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572074 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571135 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571199 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571223 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571284 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571470 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571485 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571636 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571743 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571808 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571829 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571839 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571905 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.571947 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572087 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572257 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572317 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572042 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572409 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572441 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572468 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572493 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572517 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572540 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572561 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572569 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572586 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572609 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572623 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572634 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572683 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572729 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572757 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572757 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572787 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572823 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572847 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572872 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572893 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572917 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572938 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572959 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572958 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.572981 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573004 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573028 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573049 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573071 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573092 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573112 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573136 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573157 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573156 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573141 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573158 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573202 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573185 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573307 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573347 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573352 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573368 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573445 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573462 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573484 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573484 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573506 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573527 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573551 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573572 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573578 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573596 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573744 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573749 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573776 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573802 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573826 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573850 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.573873 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.574998 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575032 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575062 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575120 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575159 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575184 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575214 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575239 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575259 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575283 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575310 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575359 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575524 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575727 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575743 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575857 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.576048 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.576098 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.576457 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.578262 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.578341 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.578341 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.578371 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.578933 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.575335 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.578985 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579067 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579133 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579216 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579262 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579295 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579330 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579358 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579395 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579427 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579461 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579491 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579524 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579556 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579584 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579640 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579703 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579927 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579953 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579982 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580011 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580039 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580070 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580098 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580124 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580156 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580185 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580222 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580256 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580288 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580322 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580356 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580399 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580451 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580507 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580556 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580608 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580654 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580713 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580759 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580803 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580838 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580885 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580930 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580973 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.581805 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.581878 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.581920 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579166 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579340 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579379 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579558 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.582295 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579605 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.579966 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580228 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580526 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580634 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580717 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.580876 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.578601 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.581581 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.581894 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.582054 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.582273 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.582273 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.582825 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.583209 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.583222 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.583455 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.584335 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.584345 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.584548 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.584611 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.584627 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.584983 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.585516 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.585605 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.585781 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.585947 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.585870 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.586026 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.586117 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.586139 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.586333 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.586352 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.586380 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.586429 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.587087 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.587715 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.589190 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.589309 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.589474 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.589899 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.589964 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590007 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590098 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590144 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590184 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590267 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590310 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590351 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590381 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590421 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590352 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590451 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590440 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590496 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590566 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590710 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.591222 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.591393 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.591470 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.591558 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.591722 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.592950 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.593004 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.593030 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.593051 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.593083 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.593117 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.593305 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.593340 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.590979 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.591985 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.591996 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.592014 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.592139 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.592168 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.592449 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.594064 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.592459 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.592870 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.593074 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.593099 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.591858 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.594275 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.594516 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.594849 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595032 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595129 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595181 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595436 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595598 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595712 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595765 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595785 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595842 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.595907 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596206 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596224 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596233 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596314 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596713 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596750 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596803 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596850 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596882 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.596930 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597077 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597218 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597276 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597425 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597413 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597475 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597493 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597595 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597788 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597816 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597903 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597936 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.597803 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598157 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598142 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598264 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598535 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598563 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598575 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598715 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598749 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598784 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598811 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598841 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598869 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598906 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598948 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.598981 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599016 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599047 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599072 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599096 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599119 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599142 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599166 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599188 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599244 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599260 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599274 4732 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599289 4732 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599303 4732 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599315 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599327 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599340 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599353 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599365 4732 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599377 4732 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node 
\"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599390 4732 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599404 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599418 4732 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599431 4732 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599445 4732 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599457 4732 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599470 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599482 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599494 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.599016 4732 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.599593 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.099567208 +0000 UTC m=+22.405443412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.599900 4732 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.599970 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.09994946 +0000 UTC m=+22.405825754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.599993 4732 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.600948 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.601459 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.602366 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.602724 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.602814 4732 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604556 4732 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604586 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604603 4732 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604620 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604635 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604649 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604677 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604800 4732 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604814 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604828 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604840 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604854 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604865 4732 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604877 4732 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604889 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604900 4732 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604938 4732 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604951 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604964 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604977 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604987 4732 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605060 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605073 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605087 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605118 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605131 4732 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605144 4732 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605173 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605185 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605334 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605351 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605364 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605376 4732 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605389 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605400 4732 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605410 4732 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605421 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605433 4732 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605445 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605458 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605470 4732 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605482 4732 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605496 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605508 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605520 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605531 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605544 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605557 4732 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605570 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605583 4732 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605594 4732 reconciler_common.go:293] "Volume detached for 
volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605607 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605619 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605630 4732 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605642 4732 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605652 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605693 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605707 4732 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605719 4732 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605729 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605740 4732 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.602805 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.603061 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.603077 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.603598 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.603743 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.603746 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.603879 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604117 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605751 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605909 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605941 4732 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605952 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605963 4732 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605973 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.605983 4732 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606013 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606024 4732 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606033 4732 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606044 4732 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606054 4732 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606067 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 
09:01:23.606102 4732 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606113 4732 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606123 4732 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606132 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606142 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606169 4732 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606182 4732 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606197 4732 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606208 4732 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606220 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606255 4732 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606268 4732 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606280 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606292 4732 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606303 4732 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606337 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606354 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606369 4732 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606381 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606414 4732 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606427 4732 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606440 4732 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606452 4732 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606464 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606474 4732 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606512 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606524 4732 reconciler_common.go:293] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606536 4732 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606548 4732 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606583 4732 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606596 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606609 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606622 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606634 4732 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606680 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606693 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606708 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606718 4732 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606754 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606766 4732 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606777 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606790 4732 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606801 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606836 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606850 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606862 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606873 4732 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606912 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606931 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606942 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606955 4732 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606968 4732 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.606979 4732 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.604503 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.611481 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.612593 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.612616 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.612641 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.612642 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.612674 4732 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.612756 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.112728875 +0000 UTC m=+22.418605079 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.612738 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.612824 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.612993 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.616575 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.616750 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.617568 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.617768 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.617787 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.617607 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.617967 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.617998 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.618017 4732 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.618274 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.618293 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:24.118249808 +0000 UTC m=+22.424126082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.618280 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.618369 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.618982 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.619263 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.619782 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.620461 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.625600 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.625785 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.625909 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.626031 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.626077 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.626114 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.626418 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.626513 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.627024 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.630192 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.631158 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.634736 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.634778 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.635328 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.636010 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.650625 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.661178 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.662222 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.666822 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.667351 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.668218 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.669970 4732 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e" exitCode=255 Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.670045 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e"} Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.670119 4732 scope.go:117] "RemoveContainer" containerID="849206839f7a8bd7f8a6a6fa78eda460866d8039008158f82dcdc4743bd4eed2" Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.681101 4732 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.681433 4732 scope.go:117] "RemoveContainer" containerID="c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e" Jan 31 09:01:23 crc kubenswrapper[4732]: E0131 09:01:23.682558 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.685742 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.697210 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.707128 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.707813 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.707901 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.707960 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.707960 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.707981 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708000 4732 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708017 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708033 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708073 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708122 4732 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708143 4732 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708152 4732 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708162 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708170 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708180 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708191 4732 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708228 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708250 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708265 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708282 4732 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 
09:01:23.708297 4732 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708312 4732 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708327 4732 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708341 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708355 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708369 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708384 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708439 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708456 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708472 4732 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708518 4732 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708533 4732 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708548 4732 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708563 4732 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708622 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708637 4732 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708653 4732 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708718 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708734 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708749 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708797 4732 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708812 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708824 4732 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708866 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708886 4732 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708897 4732 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc 
kubenswrapper[4732]: I0131 09:01:23.708917 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708957 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708971 4732 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.708982 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.716555 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.728703 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849206839f7a8bd7f8a6a6fa78eda460866d8039008158f82dcdc4743bd4eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:07Z\\\",\\\"message\\\":\\\"W0131 09:01:06.636539 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 
09:01:06.636963 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769850066 cert, and key in /tmp/serving-cert-2193001133/serving-signer.crt, /tmp/serving-cert-2193001133/serving-signer.key\\\\nI0131 09:01:06.851895 1 observer_polling.go:159] Starting file observer\\\\nW0131 09:01:06.856742 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 09:01:06.856927 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:06.857621 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2193001133/tls.crt::/tmp/serving-cert-2193001133/tls.key\\\\\\\"\\\\nF0131 09:01:07.202355 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.740249 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.750493 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.799745 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.804460 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.815099 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.816770 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.817165 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.822953 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.823722 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.826952 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: W0131 09:01:23.831539 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c78415f5f73bd74f106115622a3e6ff0efd80249c0ec92e76df9492aa9376025 WatchSource:0}: Error finding container c78415f5f73bd74f106115622a3e6ff0efd80249c0ec92e76df9492aa9376025: Status 404 returned error can't find the container with id c78415f5f73bd74f106115622a3e6ff0efd80249c0ec92e76df9492aa9376025 Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.838023 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: W0131 09:01:23.844115 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-73dfb0efcbb71b6296024c863031fb7180dfca4437ee985ef84d063cf78fd984 WatchSource:0}: Error finding container 73dfb0efcbb71b6296024c863031fb7180dfca4437ee985ef84d063cf78fd984: Status 404 returned error can't find the container with id 73dfb0efcbb71b6296024c863031fb7180dfca4437ee985ef84d063cf78fd984 Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.851069 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://849206839f7a8bd7f8a6a6fa78eda460866d8039008158f82dcdc4743bd4eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:07Z\\\",\\\"message\\\":\\\"W0131 09:01:06.636539 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 09:01:06.636963 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769850066 cert, and key in /tmp/serving-cert-2193001133/serving-signer.crt, /tmp/serving-cert-2193001133/serving-signer.key\\\\nI0131 09:01:06.851895 1 observer_polling.go:159] Starting file observer\\\\nW0131 09:01:06.856742 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 09:01:06.856927 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:06.857621 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2193001133/tls.crt::/tmp/serving-cert-2193001133/tls.key\\\\\\\"\\\\nF0131 09:01:07.202355 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.863020 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.872948 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.881632 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.901647 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c
83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.915751 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849206839f7a8bd7f8a6a6fa78eda460866d8039008158f82dcdc4743bd4eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:07Z\\\",\\\"message\\\":\\\"W0131 09:01:06.636539 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 09:01:06.636963 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769850066 cert, and key in /tmp/serving-cert-2193001133/serving-signer.crt, /tmp/serving-cert-2193001133/serving-signer.key\\\\nI0131 09:01:06.851895 1 observer_polling.go:159] Starting file observer\\\\nW0131 09:01:06.856742 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 09:01:06.856927 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:06.857621 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2193001133/tls.crt::/tmp/serving-cert-2193001133/tls.key\\\\\\\"\\\\nF0131 09:01:07.202355 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator 
for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.927504 4732 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.936452 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.945209 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.953445 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.962203 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:23 crc kubenswrapper[4732]: I0131 09:01:23.971135 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.112168 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.112260 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.112285 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.112362 4732 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.112369 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:25.112344289 +0000 UTC m=+23.418220513 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.112398 4732 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.112405 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:25.112396881 +0000 UTC m=+23.418273085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.112462 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:25.112448662 +0000 UTC m=+23.418324876 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.212925 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.213012 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.213167 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.213169 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.213211 4732 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.213237 4732 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.213184 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.213322 4732 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.213328 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:25.213299904 +0000 UTC m=+23.519176138 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.213367 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:25.213354296 +0000 UTC m=+23.519230510 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.495162 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:51:34.399556313 +0000 UTC Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.549129 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.550530 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.552924 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.554584 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.556587 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.557362 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.558241 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.559800 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.560994 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.562599 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.563602 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.566062 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.567182 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.568220 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.569805 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.570641 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.572053 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.572753 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.573556 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.576469 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.577225 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.578769 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.579408 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.581197 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.581911 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.582504 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.583684 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.584223 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.585921 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.587047 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.588094 4732 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.588365 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.591073 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.592335 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.593182 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.595900 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.599531 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.600317 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.601959 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.602877 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.603955 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.604912 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.606285 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.607150 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.608213 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.608903 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.609933 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.610649 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.611637 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.612120 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.613049 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.613565 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.614237 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.615140 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.676272 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"73dfb0efcbb71b6296024c863031fb7180dfca4437ee985ef84d063cf78fd984"} Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.680481 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b"} Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.680530 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d"} Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.680552 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c78415f5f73bd74f106115622a3e6ff0efd80249c0ec92e76df9492aa9376025"} Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.682407 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60"} Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.682444 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"efee6db2c1ab7fee455d0c74ac27cef0a722fee52555e5f007cb579a2e59ddbb"} Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.686341 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.695012 4732 scope.go:117] "RemoveContainer" containerID="c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e" Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.695289 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 09:01:24 crc kubenswrapper[4732]: E0131 09:01:24.705967 4732 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.723002 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.738880 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849206839f7a8bd7f8a6a6fa78eda460866d8039008158f82dcdc4743bd4eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:07Z\\\",\\\"message\\\":\\\"W0131 09:01:06.636539 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 
09:01:06.636963 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769850066 cert, and key in /tmp/serving-cert-2193001133/serving-signer.crt, /tmp/serving-cert-2193001133/serving-signer.key\\\\nI0131 09:01:06.851895 1 observer_polling.go:159] Starting file observer\\\\nW0131 09:01:06.856742 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 09:01:06.856927 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:06.857621 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2193001133/tls.crt::/tmp/serving-cert-2193001133/tls.key\\\\\\\"\\\\nF0131 09:01:07.202355 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.758843 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.779790 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.799860 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.817385 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.834520 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.847707 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.871359 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.888623 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.909175 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.924088 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.938761 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.952501 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.965899 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:24 crc kubenswrapper[4732]: I0131 09:01:24.979294 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:24Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:25 crc kubenswrapper[4732]: I0131 09:01:25.124387 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:25 crc kubenswrapper[4732]: I0131 09:01:25.124558 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.124617 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:27.124582608 +0000 UTC m=+25.430458832 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:25 crc kubenswrapper[4732]: I0131 09:01:25.124683 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.124748 4732 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.124784 4732 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.124816 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:27.124797595 +0000 UTC m=+25.430673839 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.124840 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:27.124827416 +0000 UTC m=+25.430703630 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:25 crc kubenswrapper[4732]: I0131 09:01:25.225995 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:25 crc kubenswrapper[4732]: I0131 09:01:25.226074 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.226187 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.226232 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.226252 4732 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.226193 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.226325 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:27.226299669 +0000 UTC m=+25.532175893 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.226343 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.226359 4732 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.226399 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:27.226385592 +0000 UTC m=+25.532261816 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:25 crc kubenswrapper[4732]: I0131 09:01:25.495614 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:31:09.62740828 +0000 UTC Jan 31 09:01:25 crc kubenswrapper[4732]: I0131 09:01:25.542499 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:25 crc kubenswrapper[4732]: I0131 09:01:25.542616 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.542690 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:25 crc kubenswrapper[4732]: I0131 09:01:25.542618 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.542811 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:25 crc kubenswrapper[4732]: E0131 09:01:25.543010 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.496611 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:45:47.948949585 +0000 UTC Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.702903 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217"} Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.728125 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.750553 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.778751 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.800387 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.817357 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.832563 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.844879 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:26 crc kubenswrapper[4732]: I0131 09:01:26.857530 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:26Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.059224 4732 csr.go:261] certificate signing request csr-xgd9j is approved, waiting to be issued Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.102860 4732 csr.go:257] certificate signing request csr-xgd9j is issued Jan 31 09:01:27 crc 
kubenswrapper[4732]: I0131 09:01:27.132071 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nsgpk"] Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.132493 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nsgpk" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.135274 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.135487 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.143330 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.143486 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:31.143448511 +0000 UTC m=+29.449324725 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.143576 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.143623 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.143754 4732 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.143804 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:31.143795863 +0000 UTC m=+29.449672067 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.143818 4732 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.143916 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:31.143888246 +0000 UTC m=+29.449764520 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.148314 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.174494 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.198776 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.225067 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.232595 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bllbs"] Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.232963 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.235107 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.235337 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.235465 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.235587 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.244781 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/533741c8-f72a-4834-ad02-d33fc939e529-hosts-file\") pod \"node-resolver-nsgpk\" (UID: \"533741c8-f72a-4834-ad02-d33fc939e529\") " pod="openshift-dns/node-resolver-nsgpk" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.244843 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.244873 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjff8\" (UniqueName: \"kubernetes.io/projected/533741c8-f72a-4834-ad02-d33fc939e529-kube-api-access-gjff8\") pod \"node-resolver-nsgpk\" (UID: \"533741c8-f72a-4834-ad02-d33fc939e529\") " pod="openshift-dns/node-resolver-nsgpk" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.244995 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.245026 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.245049 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.245067 4732 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.245140 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered 
Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.245148 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:31.24512547 +0000 UTC m=+29.551001754 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.245161 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.245173 4732 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.245221 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:31.245203743 +0000 UTC m=+29.551079947 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.245823 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.255491 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.265072 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.265036 4732 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.280930 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.295008 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.307308 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.324930 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.327902 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.336676 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.346049 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjff8\" (UniqueName: \"kubernetes.io/projected/533741c8-f72a-4834-ad02-d33fc939e529-kube-api-access-gjff8\") pod \"node-resolver-nsgpk\" (UID: \"533741c8-f72a-4834-ad02-d33fc939e529\") " pod="openshift-dns/node-resolver-nsgpk" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.346140 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80d87332-eaea-4007-a03e-a9a0f744563a-host\") pod \"node-ca-bllbs\" (UID: \"80d87332-eaea-4007-a03e-a9a0f744563a\") " pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.346191 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/533741c8-f72a-4834-ad02-d33fc939e529-hosts-file\") pod \"node-resolver-nsgpk\" (UID: \"533741c8-f72a-4834-ad02-d33fc939e529\") " pod="openshift-dns/node-resolver-nsgpk" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.346223 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/80d87332-eaea-4007-a03e-a9a0f744563a-serviceca\") pod \"node-ca-bllbs\" (UID: \"80d87332-eaea-4007-a03e-a9a0f744563a\") " pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.346244 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcdpz\" (UniqueName: \"kubernetes.io/projected/80d87332-eaea-4007-a03e-a9a0f744563a-kube-api-access-lcdpz\") pod \"node-ca-bllbs\" (UID: \"80d87332-eaea-4007-a03e-a9a0f744563a\") " pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.346369 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/533741c8-f72a-4834-ad02-d33fc939e529-hosts-file\") pod \"node-resolver-nsgpk\" (UID: \"533741c8-f72a-4834-ad02-d33fc939e529\") " pod="openshift-dns/node-resolver-nsgpk" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.350151 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.360857 4732 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.361481 4732 scope.go:117] "RemoveContainer" containerID="c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e" Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.361629 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.366576 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gjff8\" (UniqueName: \"kubernetes.io/projected/533741c8-f72a-4834-ad02-d33fc939e529-kube-api-access-gjff8\") pod \"node-resolver-nsgpk\" (UID: \"533741c8-f72a-4834-ad02-d33fc939e529\") " pod="openshift-dns/node-resolver-nsgpk" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.366651 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.379041 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.390263 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.404176 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.420370 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.433213 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.446217 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nsgpk" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.447291 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcdpz\" (UniqueName: \"kubernetes.io/projected/80d87332-eaea-4007-a03e-a9a0f744563a-kube-api-access-lcdpz\") pod \"node-ca-bllbs\" (UID: \"80d87332-eaea-4007-a03e-a9a0f744563a\") " pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.447338 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/80d87332-eaea-4007-a03e-a9a0f744563a-serviceca\") pod \"node-ca-bllbs\" (UID: \"80d87332-eaea-4007-a03e-a9a0f744563a\") " pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.447362 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80d87332-eaea-4007-a03e-a9a0f744563a-host\") pod \"node-ca-bllbs\" (UID: \"80d87332-eaea-4007-a03e-a9a0f744563a\") " pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.447433 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80d87332-eaea-4007-a03e-a9a0f744563a-host\") pod \"node-ca-bllbs\" (UID: \"80d87332-eaea-4007-a03e-a9a0f744563a\") " pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.448512 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/80d87332-eaea-4007-a03e-a9a0f744563a-serviceca\") pod \"node-ca-bllbs\" (UID: \"80d87332-eaea-4007-a03e-a9a0f744563a\") " pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.452942 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: W0131 09:01:27.462560 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod533741c8_f72a_4834_ad02_d33fc939e529.slice/crio-1523f9870b55ac1750601764c91c46fa77b171d3affc29d5f0a1d7cc19c30178 WatchSource:0}: Error finding container 1523f9870b55ac1750601764c91c46fa77b171d3affc29d5f0a1d7cc19c30178: Status 404 returned error can't find the container with id 1523f9870b55ac1750601764c91c46fa77b171d3affc29d5f0a1d7cc19c30178 Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.470221 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcdpz\" (UniqueName: \"kubernetes.io/projected/80d87332-eaea-4007-a03e-a9a0f744563a-kube-api-access-lcdpz\") pod \"node-ca-bllbs\" (UID: \"80d87332-eaea-4007-a03e-a9a0f744563a\") " pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.485626 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.496803 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:55:24.356252728 +0000 UTC Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.542664 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.542719 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.542790 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.542887 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.542962 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:27 crc kubenswrapper[4732]: E0131 09:01:27.543005 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.545859 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bllbs" Jan 31 09:01:27 crc kubenswrapper[4732]: W0131 09:01:27.565168 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80d87332_eaea_4007_a03e_a9a0f744563a.slice/crio-2e7f32ec790882c6cb1e4412a70561de6f9e5d6d4d4434f6b497ce5cebd7b75c WatchSource:0}: Error finding container 2e7f32ec790882c6cb1e4412a70561de6f9e5d6d4d4434f6b497ce5cebd7b75c: Status 404 returned error can't find the container with id 2e7f32ec790882c6cb1e4412a70561de6f9e5d6d4d4434f6b497ce5cebd7b75c Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.616914 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jnbt8"] Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.617276 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.619969 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.620194 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.620331 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.620377 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.620409 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.634148 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.649004 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2bw2\" (UniqueName: \"kubernetes.io/projected/7d790207-d357-4b47-87bf-5b505e061820-kube-api-access-h2bw2\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.649071 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d790207-d357-4b47-87bf-5b505e061820-mcd-auth-proxy-config\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.649152 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d790207-d357-4b47-87bf-5b505e061820-proxy-tls\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.649343 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rootfs\" (UniqueName: \"kubernetes.io/host-path/7d790207-d357-4b47-87bf-5b505e061820-rootfs\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.652647 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.670480 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.684725 
4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.703266 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07
bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.706896 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nsgpk" event={"ID":"533741c8-f72a-4834-ad02-d33fc939e529","Type":"ContainerStarted","Data":"1523f9870b55ac1750601764c91c46fa77b171d3affc29d5f0a1d7cc19c30178"} Jan 31 09:01:27 crc 
kubenswrapper[4732]: I0131 09:01:27.708036 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bllbs" event={"ID":"80d87332-eaea-4007-a03e-a9a0f744563a","Type":"ContainerStarted","Data":"2e7f32ec790882c6cb1e4412a70561de6f9e5d6d4d4434f6b497ce5cebd7b75c"} Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.718038 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.729792 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.747860 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.750481 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7d790207-d357-4b47-87bf-5b505e061820-rootfs\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.750523 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d790207-d357-4b47-87bf-5b505e061820-mcd-auth-proxy-config\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.750538 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2bw2\" (UniqueName: \"kubernetes.io/projected/7d790207-d357-4b47-87bf-5b505e061820-kube-api-access-h2bw2\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.750579 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d790207-d357-4b47-87bf-5b505e061820-proxy-tls\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.751177 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7d790207-d357-4b47-87bf-5b505e061820-rootfs\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.751690 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d790207-d357-4b47-87bf-5b505e061820-mcd-auth-proxy-config\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.753420 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/7d790207-d357-4b47-87bf-5b505e061820-proxy-tls\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.770789 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.780322 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.795215 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2bw2\" (UniqueName: \"kubernetes.io/projected/7d790207-d357-4b47-87bf-5b505e061820-kube-api-access-h2bw2\") pod \"machine-config-daemon-jnbt8\" (UID: \"7d790207-d357-4b47-87bf-5b505e061820\") " pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.798634 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.811895 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:27 crc kubenswrapper[4732]: I0131 09:01:27.931981 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:01:27 crc kubenswrapper[4732]: W0131 09:01:27.941215 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d790207_d357_4b47_87bf_5b505e061820.slice/crio-a0c0153293b92cb9e29eb17f363783d949abef5081ed954f8b22b5031597406c WatchSource:0}: Error finding container a0c0153293b92cb9e29eb17f363783d949abef5081ed954f8b22b5031597406c: Status 404 returned error can't find the container with id a0c0153293b92cb9e29eb17f363783d949abef5081ed954f8b22b5031597406c Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.024334 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4mxsr"] Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.024674 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-t9kqf"] Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.024955 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.025793 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.026900 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.027336 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.027363 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.027527 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.030006 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.030336 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.030393 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.047926 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053478 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-var-lib-kubelet\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053523 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-conf-dir\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053545 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-run-multus-certs\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053566 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053596 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-socket-dir-parent\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053617 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-etc-kubernetes\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053637 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053661 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-run-netns\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053827 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-var-lib-cni-bin\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053877 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-os-release\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.053943 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-system-cni-dir\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054165 4732 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-cni-dir\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054187 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-cnibin\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054220 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-os-release\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054260 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-var-lib-cni-multus\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054288 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-hostroot\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054317 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxt6\" (UniqueName: \"kubernetes.io/projected/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-kube-api-access-jhxt6\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054360 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-daemon-config\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054390 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsnx\" (UniqueName: \"kubernetes.io/projected/8e23192f-14db-41ef-af89-4a76e325d9c1-kube-api-access-fwsnx\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054415 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-system-cni-dir\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054439 4732 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-cnibin\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054566 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-cni-binary-copy\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054718 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e23192f-14db-41ef-af89-4a76e325d9c1-cni-binary-copy\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.054765 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-run-k8s-cni-cncf-io\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.063369 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.076378 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.089961 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.102693 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.104846 4732 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 08:56:27 +0000 UTC, rotation deadline is 2026-11-25 18:04:42.823011329 +0000 UTC Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.104911 4732 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7161h3m14.718103766s for next certificate rotation Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.120251 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.145738 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156490 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-cni-dir\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156543 4732 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-cnibin\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156567 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-os-release\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156591 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-var-lib-cni-multus\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156614 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-hostroot\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156636 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhxt6\" (UniqueName: \"kubernetes.io/projected/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-kube-api-access-jhxt6\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156677 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-daemon-config\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156718 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwsnx\" (UniqueName: \"kubernetes.io/projected/8e23192f-14db-41ef-af89-4a76e325d9c1-kube-api-access-fwsnx\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156745 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-system-cni-dir\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156779 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-cnibin\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156802 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.156824 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e23192f-14db-41ef-af89-4a76e325d9c1-cni-binary-copy\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157029 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-run-k8s-cni-cncf-io\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157065 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-var-lib-kubelet\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157087 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-conf-dir\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157108 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-run-multus-certs\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157131 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157160 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-socket-dir-parent\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157180 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-etc-kubernetes\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157202 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-run-netns\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" 
Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157222 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-var-lib-cni-bin\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157242 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-os-release\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157264 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157293 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-system-cni-dir\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157487 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-system-cni-dir\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157781 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-cni-dir\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157834 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-run-k8s-cni-cncf-io\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157874 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-cnibin\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157918 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-var-lib-cni-multus\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157963 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-etc-kubernetes\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158001 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-run-netns\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.157995 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-socket-dir-parent\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158032 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-var-lib-cni-bin\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158105 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-var-lib-kubelet\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158152 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-conf-dir\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158194 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-host-run-multus-certs\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158214 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-os-release\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158264 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8e23192f-14db-41ef-af89-4a76e325d9c1-hostroot\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158270 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-os-release\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158341 4732 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-cnibin\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158371 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-system-cni-dir\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158854 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.158854 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8e23192f-14db-41ef-af89-4a76e325d9c1-multus-daemon-config\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.159137 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.159160 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-cni-binary-copy\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.159397 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e23192f-14db-41ef-af89-4a76e325d9c1-cni-binary-copy\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.163413 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.178464 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwsnx\" (UniqueName: \"kubernetes.io/projected/8e23192f-14db-41ef-af89-4a76e325d9c1-kube-api-access-fwsnx\") pod \"multus-4mxsr\" (UID: \"8e23192f-14db-41ef-af89-4a76e325d9c1\") " pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.179729 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhxt6\" (UniqueName: \"kubernetes.io/projected/e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6-kube-api-access-jhxt6\") pod \"multus-additional-cni-plugins-t9kqf\" (UID: \"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\") " pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.180158 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.194034 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.206568 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.220620 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.233945 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.252726 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.269143 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.282550 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.300839 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.315443 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.331935 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.338083 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4mxsr" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.344026 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" Jan 31 09:01:28 crc kubenswrapper[4732]: W0131 09:01:28.355168 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e23192f_14db_41ef_af89_4a76e325d9c1.slice/crio-3bd339dc0405868e024e6906e67c986248a9fad29768487cedd694d3debc9b7b WatchSource:0}: Error finding container 3bd339dc0405868e024e6906e67c986248a9fad29768487cedd694d3debc9b7b: Status 404 returned error can't find the container with id 3bd339dc0405868e024e6906e67c986248a9fad29768487cedd694d3debc9b7b Jan 31 09:01:28 crc kubenswrapper[4732]: W0131 09:01:28.365302 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e6e0f4_2302_447f_a5e0_7db3d7b73cb6.slice/crio-4fd747eef746ab1ce009e8b557e70bc13c1a62dcdbe707f672f388417fa7df31 WatchSource:0}: Error finding container 4fd747eef746ab1ce009e8b557e70bc13c1a62dcdbe707f672f388417fa7df31: Status 404 returned error can't find the container with id 4fd747eef746ab1ce009e8b557e70bc13c1a62dcdbe707f672f388417fa7df31 Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.385816 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.402329 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mtkt"] Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.403270 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: W0131 09:01:28.415165 4732 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 31 09:01:28 crc kubenswrapper[4732]: E0131 09:01:28.415219 4732 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.415426 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.415579 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.415666 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.415869 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.415911 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.416151 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.417603 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.431459 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.448197 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.460827 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-var-lib-openvswitch\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.460895 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.460942 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-systemd-units\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.460963 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-script-lib\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.460984 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461011 4732 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-netns\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461028 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-slash\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461058 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-ovn\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461078 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-env-overrides\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461098 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-systemd\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461116 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-node-log\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461143 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-log-socket\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461165 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-config\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461185 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-kubelet\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461207 4732 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-etc-openvswitch\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461229 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-bin\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461259 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-netd\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461280 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jktvz\" (UniqueName: \"kubernetes.io/projected/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-kube-api-access-jktvz\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461300 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovn-node-metrics-cert\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.461324 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-openvswitch\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.462450 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.482789 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.497066 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:13:55.832653555 +0000 UTC Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.499115 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.511350 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is 
after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.530867 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.547434 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562683 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-netns\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562739 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-slash\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562784 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-ovn\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562807 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-env-overrides\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562825 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-node-log\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562844 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-systemd\") pod \"ovnkube-node-8mtkt\" (UID: 
\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562865 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-log-socket\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562886 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-config\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562907 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-kubelet\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562932 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-etc-openvswitch\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562949 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-bin\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562971 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-netd\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.562987 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jktvz\" (UniqueName: \"kubernetes.io/projected/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-kube-api-access-jktvz\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563010 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-openvswitch\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563027 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovn-node-metrics-cert\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563052 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-var-lib-openvswitch\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563069 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563064 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-kubelet\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563105 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-systemd-units\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563132 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-script-lib\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563152 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563175 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-netns\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563008 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-node-log\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563225 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563233 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-etc-openvswitch\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563267 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-var-lib-openvswitch\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563278 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-bin\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563296 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563321 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-netd\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563069 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-log-socket\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563362 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-systemd-units\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563108 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-systemd\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563692 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-openvswitch\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563683 4732 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-env-overrides\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.563802 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-slash\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.564135 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-config\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.564579 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-ovn\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.564950 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-script-lib\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.569181 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovn-node-metrics-cert\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.581431 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.584665 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jktvz\" (UniqueName: \"kubernetes.io/projected/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-kube-api-access-jktvz\") pod \"ovnkube-node-8mtkt\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.598815 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.613695 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.627447 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.638097 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is 
after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.647905 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.666368 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"system
d-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.681424 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@s
ha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.693874 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.708667 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.712720 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bllbs" event={"ID":"80d87332-eaea-4007-a03e-a9a0f744563a","Type":"ContainerStarted","Data":"a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0"} Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.714301 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" event={"ID":"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6","Type":"ContainerStarted","Data":"0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014"} Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.714334 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-t9kqf" event={"ID":"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6","Type":"ContainerStarted","Data":"4fd747eef746ab1ce009e8b557e70bc13c1a62dcdbe707f672f388417fa7df31"} Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.716254 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nsgpk" event={"ID":"533741c8-f72a-4834-ad02-d33fc939e529","Type":"ContainerStarted","Data":"ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569"} Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.717656 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4mxsr" event={"ID":"8e23192f-14db-41ef-af89-4a76e325d9c1","Type":"ContainerStarted","Data":"e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56"} Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.717692 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4mxsr" event={"ID":"8e23192f-14db-41ef-af89-4a76e325d9c1","Type":"ContainerStarted","Data":"3bd339dc0405868e024e6906e67c986248a9fad29768487cedd694d3debc9b7b"} Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.719871 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7"} Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.720004 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5"} Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.720106 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"a0c0153293b92cb9e29eb17f363783d949abef5081ed954f8b22b5031597406c"} Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.728316 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.740580 4732 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.751684 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.768579 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.825220 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.840706 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.881320 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.927818 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:28 crc kubenswrapper[4732]: I0131 09:01:28.967226 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:28Z is after 2025-08-24T17:21:41Z"
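The webhook failures above all come down to Go's standard x509 validity check: the kubelet rejects the webhook's serving certificate because the current time falls outside its NotBefore/NotAfter window, which produces exactly the "certificate has expired or is not yet valid: current time ... is after ..." wording in these entries. A minimal sketch of that check (illustrative only, not kubelet code; the webhook-cert.pem path is an assumed file copied off the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("webhook-cert.pem") // hypothetical path, not taken from the log
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n", now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n", now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}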
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.005454 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.046526 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.087510 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.163732 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.207137 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80
843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.252126 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47
137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.288432 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.324142 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.381474 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.386024 4732 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.386024 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.498270 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:51:44.967293426 +0000 UTC
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.542691 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:01:29 crc kubenswrapper[4732]: E0131 09:01:29.542858 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.543033 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:01:29 crc kubenswrapper[4732]: E0131 09:01:29.543208 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.543374 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:01:29 crc kubenswrapper[4732]: E0131 09:01:29.543451 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.725380 4732 generic.go:334] "Generic (PLEG): container finished" podID="e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6" containerID="0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014" exitCode=0
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.725478 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" event={"ID":"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6","Type":"ContainerDied","Data":"0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014"}
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.727334 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"8ed5be886bc7763adb1d7a0a054a6dd73cde6a707faa32148f1f5ddc889335e4"}
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.746789 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.766910 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.768433 4732 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.770836 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.770879 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.770894 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.771112 4732 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.781673 4732 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.782057 4732 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.783271 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.783305 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.783318 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.783341 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.783357 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.789980 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: E0131 09:01:29.805413 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.810280 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.810323 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.810338 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.810358 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.810373 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.814040 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: E0131 09:01:29.823781 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.828818 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.830463 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.830522 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.830532 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4732]: 
I0131 09:01:29.830554 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.830567 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4732]: E0131 09:01:29.844715 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.846824 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.849074 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.849122 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.849138 4732 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.849161 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.849172 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.863175 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: E0131 09:01:29.864072 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image inventory elided: byte-for-byte identical to the list in the preceding status-patch attempt above... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.867408 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.867447 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.867455 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.867470 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.867480 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.878795 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:29 crc kubenswrapper[4732]: E0131 09:01:29.882312 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [ ...status patch payload elided: byte-for-byte identical to the 09:01:29.864072 attempt above... ] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:29 crc kubenswrapper[4732]: E0131 09:01:29.882477 4732 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.884433 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.884468 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.884477 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.884493 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.884507 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.891856 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.904493 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.925015 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.944142 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.957434 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.973036 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.988762 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.988810 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.988822 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.988842 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.988859 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:29Z","lastTransitionTime":"2026-01-31T09:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:29 crc kubenswrapper[4732]: I0131 09:01:29.993878 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.091594 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.091941 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.091955 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.091972 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.091982 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.194145 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.194186 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.194197 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.194213 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.194223 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.297118 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.297167 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.297177 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.297195 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.297207 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.399920 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.400261 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.400271 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.400287 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.400296 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.499312 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:47:34.673103183 +0000 UTC Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.502847 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.502895 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.502912 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.502934 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.502948 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.604839 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.604889 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.604900 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.604917 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.604931 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.707825 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.707870 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.707880 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.707896 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.707907 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.733188 4732 generic.go:334] "Generic (PLEG): container finished" podID="e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6" containerID="c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688" exitCode=0 Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.733274 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" event={"ID":"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6","Type":"ContainerDied","Data":"c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.737378 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193" exitCode=0 Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.737428 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.756538 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 
2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.773065 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.785940 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.802213 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.810799 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.811052 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.811100 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.811125 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.811155 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.817136 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.833787 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.847603 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.873041 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.897152 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.918087 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.918137 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.918151 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.918171 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.918184 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:30Z","lastTransitionTime":"2026-01-31T09:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.928623 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.944959 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.959974 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.981009 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:30 crc kubenswrapper[4732]: I0131 09:01:30.996714 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:30Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.018151 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.020136 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.020171 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.020183 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.020202 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.020214 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.038792 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.058006 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.070354 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.084269 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.099278 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.110495 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.120919 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.122925 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.122971 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.122984 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.123002 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.123014 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.135478 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.156810 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.170323 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.184956 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.201094 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.201263 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.201372 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.201415 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.201491 4732 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.201493 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:39.201460931 +0000 UTC m=+37.507337135 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.201558 4732 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.201594 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:39.201564624 +0000 UTC m=+37.507440828 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.201618 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:39.201609766 +0000 UTC m=+37.507486040 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.219251 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.224859 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.224896 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.224906 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.224922 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.224931 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.235979 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.246534 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.302961 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.303017 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.303179 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.303200 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.303215 4732 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.303211 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.303256 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.303268 4732 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.303277 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:39.303254913 +0000 UTC m=+37.609131117 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.303329 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:39.303309545 +0000 UTC m=+37.609185749 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.326928 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.326963 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.326973 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.326988 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.326997 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.430030 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.430074 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.430086 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.430104 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.430116 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.500372 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:36:11.116790955 +0000 UTC Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.533521 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.533555 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.533563 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.533582 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.533592 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.541806 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.541857 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.541806 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.541961 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.542017 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:31 crc kubenswrapper[4732]: E0131 09:01:31.542100 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.637106 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.637136 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.637146 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.637161 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.637171 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.743682 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.743747 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.743756 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.743773 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.743784 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.747257 4732 generic.go:334] "Generic (PLEG): container finished" podID="e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6" containerID="85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58" exitCode=0 Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.747337 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" event={"ID":"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6","Type":"ContainerDied","Data":"85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.752534 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.752596 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.752611 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.752625 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.752636 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.752648 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.768448 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.797639 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.814336 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.826998 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.844512 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.846350 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.846392 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.846402 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.846417 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.846426 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.855924 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.865841 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.877282 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.902879 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.917640 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.928620 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.941910 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.948372 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.948419 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.948429 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.948455 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.948470 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:31Z","lastTransitionTime":"2026-01-31T09:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.957977 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.969671 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:31 crc kubenswrapper[4732]: I0131 09:01:31.979792 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:31Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.051855 4732 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.051892 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.051902 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.051921 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.051933 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.153779 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.153820 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.153832 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.153851 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.153863 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.255888 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.255960 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.255972 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.255992 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.256001 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.268376 4732 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.358925 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.358973 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.358990 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.359013 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.359029 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.462330 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.462386 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.462399 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.462420 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.462435 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.501004 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:46:24.768149534 +0000 UTC
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.557632 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.570185 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.570226 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.570238 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.570257 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.570269 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.583776 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.602631 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.617674 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.630839 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.651250 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.664865 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.672219 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.672244 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.672252 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.672267 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.672276 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.677163 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.694452 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.713408 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.730429 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.748772 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.758933 4732 generic.go:334] "Generic (PLEG): container finished" podID="e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6" containerID="111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0" exitCode=0
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.758986 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" event={"ID":"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6","Type":"ContainerDied","Data":"111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0"}
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.769149 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.774442 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.774488 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.774499 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.774514 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.774524 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.783968 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.799257 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.813687 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.826504 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.841339 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.854272 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.865500 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.876467 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.876492 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.876500 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.876514 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.876523 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.889206 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.907433 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z 
is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.919310 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.932495 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.944461 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.955818 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.975912 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.981673 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.981724 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.981734 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.981753 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.981765 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:32Z","lastTransitionTime":"2026-01-31T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:32 crc kubenswrapper[4732]: I0131 09:01:32.997762 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:32Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.019598 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.056124 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.085057 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.085114 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.085127 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.085152 4732 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.085169 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.188062 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.188122 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.188135 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.188155 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.188168 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.290990 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.291040 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.291052 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.291073 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.291087 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.393582 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.393636 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.393647 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.393714 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.393727 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.496051 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.496089 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.496098 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.496113 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.496123 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.501473 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:26:06.647242429 +0000 UTC Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.542833 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.542866 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.542930 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:33 crc kubenswrapper[4732]: E0131 09:01:33.543504 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:33 crc kubenswrapper[4732]: E0131 09:01:33.543655 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:33 crc kubenswrapper[4732]: E0131 09:01:33.543902 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.599701 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.599757 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.599769 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.599790 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.599804 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.703108 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.703201 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.703225 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.703254 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.703276 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.767085 4732 generic.go:334] "Generic (PLEG): container finished" podID="e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6" containerID="2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b" exitCode=0 Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.767167 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" event={"ID":"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6","Type":"ContainerDied","Data":"2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.772791 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.784028 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.802438 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.806719 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.806767 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.806777 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.806798 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.806809 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.819266 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhx
t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.834558 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.845812 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.858897 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.887667 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.907667 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.914161 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.914205 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.914215 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.914235 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.914253 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:33Z","lastTransitionTime":"2026-01-31T09:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.922888 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.938444 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.955975 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.972248 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:33 crc kubenswrapper[4732]: I0131 09:01:33.984442 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:33Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.004841 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.017209 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.017249 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.017257 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.017273 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.017282 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.020012 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.119494 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.119532 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.119543 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.119562 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.119573 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.222634 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.222690 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.222701 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.222719 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.222732 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.325710 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.325733 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.325742 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.325755 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.325764 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.428638 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.428697 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.428709 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.428732 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.428745 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.501877 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:23:24.901151788 +0000 UTC Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.530944 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.531023 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.531061 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.531085 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.531107 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.634300 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.634340 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.634349 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.634365 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.634376 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.737569 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.737614 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.737625 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.737645 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.737657 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.780865 4732 generic.go:334] "Generic (PLEG): container finished" podID="e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6" containerID="4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168" exitCode=0 Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.780922 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" event={"ID":"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6","Type":"ContainerDied","Data":"4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.797175 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.847582 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.847924 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.847979 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.848000 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.848019 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.848038 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.857344 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.867372 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.886199 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.901032 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.915342 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.927534 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.940532 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e
89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.950899 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.950933 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.951220 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.951246 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.951259 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:34Z","lastTransitionTime":"2026-01-31T09:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.952022 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time
2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.964973 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.979270 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:34 crc kubenswrapper[4732]: I0131 09:01:34.996269 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:34Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.010831 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.024031 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.053792 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.053837 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.053850 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.053877 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.053905 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.157166 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.157228 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.157246 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.157275 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.157295 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
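
The NodeNotReady / KubeletNotReady churn above and below is driven by the kubelet's network-readiness probe: the container runtime reports NetworkReady=false because no CNI network configuration exists under /etc/kubernetes/cni/net.d/ yet. A minimal Go sketch of that kind of directory check follows; this is an illustration only, not the kubelet's actual libcni-based code, and the path and accepted extensions are assumptions lifted from the message text itself.

// cnicheck.go - sketch of the readiness test implied by the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports nil if at least one CNI config file is present.
func networkReady(confDir string) error {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return fmt.Errorf("reading %s: %w", confDir, err)
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions assumed for illustration
			return nil
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := networkReady("/etc/kubernetes/cni/net.d/"); err != nil {
		fmt.Println("NetworkReady=false:", err)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true")
}

Once the network provider (here, multus and its CNI plugin init containers) writes a config into that directory, the same probe starts returning ready and the NodeNotReady events stop.
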
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.260847 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.260908 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.260926 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.260960 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.260977 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.363294 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.363355 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.363380 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.363404 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.363420 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.466247 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.466293 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.466302 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.466318 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.466327 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.502536 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:48:32.899578693 +0000 UTC
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.542091 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.542172 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.542274 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:01:35 crc kubenswrapper[4732]: E0131 09:01:35.542304 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:01:35 crc kubenswrapper[4732]: E0131 09:01:35.542471 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:01:35 crc kubenswrapper[4732]: E0131 09:01:35.542737 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.572128 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.572190 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.572208 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.572236 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.572254 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
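
Interleaved with the readiness churn, certificate_manager.go reports that the kubelet-serving certificate's rotation deadline (2026-01-17) has already passed, and every status patch in this log fails because the network-node-identity webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24. The time-validity test behind the recurring "x509: certificate has expired or is not yet valid" message is plain crypto/x509 behavior and can be reproduced directly; a sketch, where the certificate path is a command-line placeholder rather than a path taken from this log:

// certcheck.go - reproduces the NotBefore/NotAfter check that
// crypto/x509 applies during verification. Run as:
//   go run certcheck.go /path/to/cert.pem
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// mirrors the log: "current time <now> is after <NotAfter>"
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate is valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
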
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.674794 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.674858 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.674876 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.674902 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.674920 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.778166 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.778222 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.778233 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.778248 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.778257 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.794346 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" event={"ID":"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6","Type":"ContainerStarted","Data":"038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f"} Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.813226 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.826179 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.843465 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80
843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.866822 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47
137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.880761 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.880802 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc 
kubenswrapper[4732]: I0131 09:01:35.880819 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.880838 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.880850 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.882560 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.897085 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.912975 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.929535 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.942939 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.954212 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.973267 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.984078 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.984119 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.984127 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.984143 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.984153 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:35Z","lastTransitionTime":"2026-01-31T09:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:35 crc kubenswrapper[4732]: I0131 09:01:35.989344 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.001387 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:35Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.011989 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.026963 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.087582 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.087632 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc 
kubenswrapper[4732]: I0131 09:01:36.087642 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.087663 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.087697 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.190532 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.190589 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.190609 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.190634 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.190650 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.293573 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.293620 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.293631 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.293652 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.293683 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.396659 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.396774 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.396793 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.396819 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.396839 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.500598 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.500633 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.500641 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.500657 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.500669 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.503007 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:58:03.978940705 +0000 UTC Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.603117 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.603197 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.603211 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.603227 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.603237 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.706639 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.706701 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.706715 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.706736 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.706747 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.805324 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.805907 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.806133 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.809798 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.809849 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.809865 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.809889 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.809908 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.818827 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.829742 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.833689 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.838234 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.839525 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.849038 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.865493 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.880780 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.894119 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.906646 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.912387 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.912451 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.912463 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.912484 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.912501 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:36Z","lastTransitionTime":"2026-01-31T09:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.924205 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.935528 4732 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.948124 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.960665 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.976574 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:36 crc kubenswrapper[4732]: I0131 09:01:36.995075 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:36Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.006053 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.015081 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.015114 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.015125 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.015142 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.015155 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.018901 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.030272 4732 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.041391 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.052060 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.061117 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.072144 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.090701 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.104965 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.116101 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.117990 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.118027 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.118037 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.118056 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.118068 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.126885 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.135289 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.151466 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.162512 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.173039 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.182357 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:37Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.220499 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.220534 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.220546 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.220566 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.220577 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.323231 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.323270 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.323282 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.323298 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.323309 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.427022 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.427070 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.427082 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.427101 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.427113 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.503475 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:45:28.453289623 +0000 UTC Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.530764 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.530827 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.530854 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.530885 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.530908 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.542281 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.542297 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.542298 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:37 crc kubenswrapper[4732]: E0131 09:01:37.542413 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:37 crc kubenswrapper[4732]: E0131 09:01:37.542620 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:37 crc kubenswrapper[4732]: E0131 09:01:37.542794 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.634107 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.634193 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.634208 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.634233 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.634250 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.737508 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.737557 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.737568 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.737587 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.737597 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.809941 4732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.840587 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.840761 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.840780 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.840809 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.840830 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.943368 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.943443 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.943468 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.943500 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:37 crc kubenswrapper[4732]: I0131 09:01:37.943525 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:37Z","lastTransitionTime":"2026-01-31T09:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.046226 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.046272 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.046293 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.046312 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.046324 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.157602 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.157734 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.157766 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.157801 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.157838 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.261174 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.261222 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.261233 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.261253 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.261264 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.363350 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.363385 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.363393 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.363408 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.363417 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.465511 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.465548 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.465558 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.465575 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.465588 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.504223 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:12:05.250027298 +0000 UTC Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.570766 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.570811 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.570823 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.570843 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.570856 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.674183 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.674222 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.674234 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.674250 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.674260 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
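The certificate_manager.go:356 entry above is itself a warning sign: the rotation deadline (2025-12-29) is already behind the log clock (2026-01-31), so the kubelet-serving certificate is overdue for rotation even though it does not expire until 2026-02-24. A small Go check of that date arithmetic, with the timestamps taken from the entry (fractional seconds dropped):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse(time.RFC3339, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	now := parse("2026-01-31T09:01:38Z")      // log clock at this entry
    	deadline := parse("2025-12-29T17:12:05Z") // rotation deadline from the entry
    	expiry := parse("2026-02-24T05:53:03Z")   // certificate expiration from the entry
    	fmt.Println("rotation overdue:", now.After(deadline)) // true
    	fmt.Println("already expired: ", now.After(expiry))   // false
    }
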
Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.777159 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.777208 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.777221 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.777239 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.777252 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.812381 4732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.880179 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.880226 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.880238 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.880257 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.880268 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.983289 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.983332 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.983341 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.983356 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:38 crc kubenswrapper[4732]: I0131 09:01:38.983365 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:38Z","lastTransitionTime":"2026-01-31T09:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.085725 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.085769 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.085782 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.085800 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.085812 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.189054 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.189103 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.189113 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.189131 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.189151 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
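All of the NotReady churn above traces to a single condition: the kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness check; the directory comes straight from the log, while the extension list is an assumption based on common CNI config loaders, not read from kubelet source:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log entries above
    	var found []string
    	// Assumed extensions; typical CNI loaders accept .conf, .conflist, and .json.
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		m, _ := filepath.Glob(filepath.Join(confDir, pat))
    		found = append(found, m...)
    	}
    	if len(found) == 0 {
    		fmt.Println("no CNI configuration file found; network plugin not ready")
    		os.Exit(1)
    	}
    	fmt.Println("CNI config present:", found)
    }
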
Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.286211 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.286356 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:01:55.286327094 +0000 UTC m=+53.592203298 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.286387 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.286451 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.286567 4732 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.286622 4732 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.286631 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:55.286619834 +0000 UTC m=+53.592496038 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.286841 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:55.286745588 +0000 UTC m=+53.592621812 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.292104 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.292188 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.292204 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.292232 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.292249 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
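Each failed volume operation above is parked by nestedpendingoperations.go:348 for 16s before the next attempt (failure at m=+53.59 minus the 16s durationBeforeRetry). A 16s step is consistent with a doubling backoff that has already been through several shorter waits; the base and cap below are illustrative assumptions, not values read from the kubelet:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed doubling schedule: 0.5s, 1s, 2s, 4s, 8s, 16s, ... up to a cap.
    	delay := 500 * time.Millisecond
    	const cap = 2 * time.Minute
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, delay)
    		delay *= 2
    		if delay > cap {
    			delay = cap
    		}
    	}
    }
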
Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.387532 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.387619 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.387853 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.387890 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.387905 4732 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.387991 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:55.387966051 +0000 UTC m=+53.693842265 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.388043 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.388174 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.388211 4732 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.388464 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:55.388377705 +0000 UTC m=+53.694254009 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.395231 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.395314 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.395327 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.395351 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.395364 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
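The projected.go failures above mean the kube-api-access-* volumes cannot be assembled because the ConfigMaps they project, kube-root-ca.crt and openshift-service-ca.crt in openshift-network-diagnostics, are not yet registered in the kubelet's object cache. A hypothetical client-go sketch (module dependencies assumed available) that asks the API server whether those objects exist at all, to separate "missing" from "not yet synced":

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ns := "openshift-network-diagnostics" // namespace from the log entries above
    	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
    		_, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		fmt.Printf("%s/%s: err=%v\n", ns, name, err)
    	}
    }
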
Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.498619 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.498692 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.498709 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.498734 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.498750 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.505208 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:24:26.892910947 +0000 UTC Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.542097 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.542143 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.542284 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.542316 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.542400 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:39 crc kubenswrapper[4732]: E0131 09:01:39.542827 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.543138 4732 scope.go:117] "RemoveContainer" containerID="c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.601733 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.602112 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.602125 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.602144 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.602156 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.704779 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.704825 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.704841 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.704866 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.704887 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.808378 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.808420 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.808430 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.808446 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.808455 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.910786 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.910832 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.910847 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.910868 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:39 crc kubenswrapper[4732]: I0131 09:01:39.910882 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:39Z","lastTransitionTime":"2026-01-31T09:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.013562 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.013597 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.013609 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.013627 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.013641 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.116752 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.116787 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.116799 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.116818 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.116829 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.122163 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.122200 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.122220 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.122237 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.122250 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: E0131 09:01:40.139221 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.143751 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.143806 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
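Independently of the CNI problem, the node status patch above is rejected because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24, months before the log clock. A small Go probe, endpoint taken from the log, that fetches the peer certificate and prints its validity window (InsecureSkipVerify is deliberate here, since verification is exactly what fails):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	addr := "127.0.0.1:9743" // webhook endpoint from the error above
    	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()
    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("NotBefore=%s NotAfter=%s expired=%v\n",
    		cert.NotBefore, cert.NotAfter, time.Now().After(cert.NotAfter))
    }
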
event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.143822 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.143847 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.143864 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: E0131 09:01:40.164419 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.173201 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.173262 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.173274 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.173295 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.173306 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: E0131 09:01:40.185326 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.189189 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.189219 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.189230 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.189247 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.189256 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: E0131 09:01:40.201713 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.205613 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.205651 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.205680 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.205701 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.205712 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: E0131 09:01:40.217715 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: E0131 09:01:40.217872 4732 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.219449 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.219488 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.219500 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.219518 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.219531 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.276608 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk"] Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.277079 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.280979 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.281726 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.292861 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.307205 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.322306 4732 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.322351 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.322365 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.322385 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.322401 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.323641 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.338612 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.353011 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.382734 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.398261 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqth\" (UniqueName: \"kubernetes.io/projected/0313609d-3507-4db5-a190-9dbf59d73e6e-kube-api-access-phqth\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: 
I0131 09:01:40.398325 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0313609d-3507-4db5-a190-9dbf59d73e6e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.398362 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0313609d-3507-4db5-a190-9dbf59d73e6e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.398414 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0313609d-3507-4db5-a190-9dbf59d73e6e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.401307 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.415363 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.425724 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.425749 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.425759 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.425774 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.425793 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.426697 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.436275 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.444092 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.461496 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.475640 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.487410 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.499361 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0313609d-3507-4db5-a190-9dbf59d73e6e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.499399 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phqth\" (UniqueName: \"kubernetes.io/projected/0313609d-3507-4db5-a190-9dbf59d73e6e-kube-api-access-phqth\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.499427 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0313609d-3507-4db5-a190-9dbf59d73e6e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.499465 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0313609d-3507-4db5-a190-9dbf59d73e6e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.499848 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.500062 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0313609d-3507-4db5-a190-9dbf59d73e6e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.500788 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0313609d-3507-4db5-a190-9dbf59d73e6e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.505520 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:10:21.579536349 +0000 UTC Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.513252 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0313609d-3507-4db5-a190-9dbf59d73e6e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.515809 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phqth\" (UniqueName: \"kubernetes.io/projected/0313609d-3507-4db5-a190-9dbf59d73e6e-kube-api-access-phqth\") pod \"ovnkube-control-plane-749d76644c-gchqk\" (UID: \"0313609d-3507-4db5-a190-9dbf59d73e6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.528036 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.528064 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.528073 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.528088 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.528099 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.529150 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.593375 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" Jan 31 09:01:40 crc kubenswrapper[4732]: W0131 09:01:40.610346 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0313609d_3507_4db5_a190_9dbf59d73e6e.slice/crio-9691c933bac02369bc86130d8532c83da0687e2cedb632cfac84d2ae0a8ecdec WatchSource:0}: Error finding container 9691c933bac02369bc86130d8532c83da0687e2cedb632cfac84d2ae0a8ecdec: Status 404 returned error can't find the container with id 9691c933bac02369bc86130d8532c83da0687e2cedb632cfac84d2ae0a8ecdec Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.630436 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.630485 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.630496 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.630517 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.630538 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.733919 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.733954 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.733963 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.733999 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.734009 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.821334 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" event={"ID":"0313609d-3507-4db5-a190-9dbf59d73e6e","Type":"ContainerStarted","Data":"9691c933bac02369bc86130d8532c83da0687e2cedb632cfac84d2ae0a8ecdec"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.823416 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/0.log" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.826800 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818" exitCode=1 Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.826864 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.827773 4732 scope.go:117] "RemoveContainer" containerID="4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.828947 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.832996 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.833567 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.841613 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.841684 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.841701 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.841723 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.841739 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.848233 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.867637 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.888054 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.906160 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.919524 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.934070 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.944378 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.944419 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.944429 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.944450 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.944461 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:40Z","lastTransitionTime":"2026-01-31T09:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.946279 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.957920 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.975905 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.987006 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:40 crc kubenswrapper[4732]: I0131 09:01:40.997101 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:40Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.009830 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.022220 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.035111 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.045006 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.046521 4732 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.046560 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.046571 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.046590 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.046602 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.058617 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.073871 4732 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.086374 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.107504 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.139583 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.148729 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.148778 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.148792 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.148818 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.148830 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.157632 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 
2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.172904 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.183949 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.196470 4732 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.208168 4732 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55
d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.229064 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbd
e47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.241797 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.251220 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.251247 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.251258 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.251273 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.251284 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.254328 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.266171 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.281006 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.290432 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.309137 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.353503 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.353784 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.353889 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.353971 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.354045 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.456589 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.456654 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.456680 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.456698 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.456711 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.506002 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:43:38.562283942 +0000 UTC Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.541645 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.541688 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:41 crc kubenswrapper[4732]: E0131 09:01:41.541815 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.541844 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:41 crc kubenswrapper[4732]: E0131 09:01:41.542004 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:41 crc kubenswrapper[4732]: E0131 09:01:41.542156 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.559230 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.559259 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.559270 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.559286 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.559298 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.662464 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.662521 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.662532 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.662555 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.662569 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.734449 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7fgvm"] Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.735795 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:41 crc kubenswrapper[4732]: E0131 09:01:41.735899 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.749083 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.765431 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.765502 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.765523 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.765553 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.765575 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.769823 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.786471 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.802613 
4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: 
I0131 09:01:41.815157 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.815231 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47jtm\" (UniqueName: \"kubernetes.io/projected/3bd29a31-1a47-40da-afc5-6c4423067083-kube-api-access-47jtm\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.826016 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.836727 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" event={"ID":"0313609d-3507-4db5-a190-9dbf59d73e6e","Type":"ContainerStarted","Data":"c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.836777 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" event={"ID":"0313609d-3507-4db5-a190-9dbf59d73e6e","Type":"ContainerStarted","Data":"b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.838235 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/0.log" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.841087 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" 
event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.841205 4732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.853491 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408
f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.867189 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.868560 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.868592 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.868603 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.868623 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.868633 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.882430 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.896899 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.909965 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.915726 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47jtm\" (UniqueName: \"kubernetes.io/projected/3bd29a31-1a47-40da-afc5-6c4423067083-kube-api-access-47jtm\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.915813 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:41 crc kubenswrapper[4732]: E0131 09:01:41.915924 4732 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:41 crc kubenswrapper[4732]: E0131 09:01:41.915969 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs podName:3bd29a31-1a47-40da-afc5-6c4423067083 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:42.415954913 +0000 UTC m=+40.721831117 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs") pod "network-metrics-daemon-7fgvm" (UID: "3bd29a31-1a47-40da-afc5-6c4423067083") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.922455 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.945095 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.945579 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47jtm\" (UniqueName: \"kubernetes.io/projected/3bd29a31-1a47-40da-afc5-6c4423067083-kube-api-access-47jtm\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.967369 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.971395 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.971438 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.971451 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.971469 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.971481 4732 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:41Z","lastTransitionTime":"2026-01-31T09:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.984181 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:41 crc kubenswrapper[4732]: I0131 09:01:41.997844 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:41Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.012749 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.023332 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.034216 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.045206 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.069839 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.075439 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.075492 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.075505 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.075528 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.075545 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.086243 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.099640 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.110354 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.126893 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.141093 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.152328 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.164758 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.177921 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.177976 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.177988 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.178009 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.178021 4732 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.186644 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 
handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.201972 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.219985 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.237926 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.256601 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.268953 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.280862 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.280925 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.280938 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.280958 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.280971 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.285082 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.385107 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.385176 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.385198 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.385228 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.385254 4732 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.420929 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:42 crc kubenswrapper[4732]: E0131 09:01:42.421137 4732 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:42 crc kubenswrapper[4732]: E0131 09:01:42.421245 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs podName:3bd29a31-1a47-40da-afc5-6c4423067083 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:43.421220605 +0000 UTC m=+41.727096809 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs") pod "network-metrics-daemon-7fgvm" (UID: "3bd29a31-1a47-40da-afc5-6c4423067083") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.488134 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.488191 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.488209 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.488232 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.488248 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.506798 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:52:12.41884144 +0000 UTC Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.558304 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\
" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.574307 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.590898 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.590940 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.590955 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.590979 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.590994 4732 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.608274 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.622608 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.636254 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.651687 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.664175 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.677618 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.687648 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.697437 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.697521 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.697638 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.697682 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.697697 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.701325 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.719828 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 
handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.732246 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.746400 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.762937 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.776073 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.785078 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.797251 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.799980 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.800024 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.800035 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.800054 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.800065 4732 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.844402 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/1.log" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.845109 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/0.log" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.847488 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a" exitCode=1 Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.847539 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.847819 4732 scope.go:117] "RemoveContainer" containerID="4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.848326 4732 scope.go:117] "RemoveContainer" containerID="9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a" Jan 31 09:01:42 crc kubenswrapper[4732]: E0131 09:01:42.848575 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.870155 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.888297 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.901284 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.903424 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.903480 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.903498 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.903524 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.903540 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:42Z","lastTransitionTime":"2026-01-31T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.914086 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.943062 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 
handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.958571 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.971756 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:42 crc kubenswrapper[4732]: I0131 09:01:42.986140 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:42Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.003388 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.005978 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.006022 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.006033 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.006050 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.006062 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.015233 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.028586 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.041184 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.053180 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.063743 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.075130 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.094748 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"
cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.106682 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.108197 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.108224 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.108238 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.108257 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.108269 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.127087 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"1 09:01:42.390171 6216 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390285 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390357 6216 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390398 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390856 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:42.390942 6216 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:42.391006 6216 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:42.391068 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 09:01:42.391132 6216 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 09:01:42.391188 6216 factory.go:656] Stopping watch factory\\\\nI0131 09:01:42.391259 6216 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:42.390987 6216 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:42.391113 6216 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:42.391233 6216 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.141627 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.155488 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.170319 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.207286 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.211097 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.211154 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.211168 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.211186 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.211198 4732 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.251876 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.293680 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.313446 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.313494 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.313504 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.313523 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.313536 4732 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.329086 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.370477 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.413853 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.416000 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.416036 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.416047 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.416065 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.416078 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.431129 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:43 crc kubenswrapper[4732]: E0131 09:01:43.431334 4732 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:43 crc kubenswrapper[4732]: E0131 09:01:43.431409 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs podName:3bd29a31-1a47-40da-afc5-6c4423067083 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:45.431386396 +0000 UTC m=+43.737262620 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs") pod "network-metrics-daemon-7fgvm" (UID: "3bd29a31-1a47-40da-afc5-6c4423067083") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.454383 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.488498 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.507456 4732 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:45:33.891858799 +0000 UTC Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.519458 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.519539 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.519548 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.519565 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.519575 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.533837 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.541868 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.541936 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.542017 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:43 crc kubenswrapper[4732]: E0131 09:01:43.542248 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.542306 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:43 crc kubenswrapper[4732]: E0131 09:01:43.542554 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:01:43 crc kubenswrapper[4732]: E0131 09:01:43.542697 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:43 crc kubenswrapper[4732]: E0131 09:01:43.542822 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.570981 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.606800 4732 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.622104 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.622159 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.622171 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.622190 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.622203 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.649703 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.703507 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:43Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.724364 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.724400 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.724412 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.724431 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.724442 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.826934 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.827020 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.827042 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.827071 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.827091 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.853731 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/1.log" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.930378 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.930695 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.930729 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.930761 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:43 crc kubenswrapper[4732]: I0131 09:01:43.930786 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:43Z","lastTransitionTime":"2026-01-31T09:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.033715 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.033784 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.033798 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.033816 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.033829 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.136312 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.136351 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.136361 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.136379 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.136388 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.239315 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.239393 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.239410 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.239436 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.239454 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.342261 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.342294 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.342303 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.342319 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.342327 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.444790 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.444864 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.444882 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.444901 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.444915 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.508057 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:02:34.262369231 +0000 UTC Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.548764 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.548833 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.548843 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.548863 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.548880 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.653299 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.653357 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.653372 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.653392 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.653405 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.756110 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.756151 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.756163 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.756179 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.756191 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.859093 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.859138 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.859149 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.859167 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.859178 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.962228 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.962279 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.962294 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.962316 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:44 crc kubenswrapper[4732]: I0131 09:01:44.962329 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:44Z","lastTransitionTime":"2026-01-31T09:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.065913 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.065981 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.066005 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.066036 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.066057 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.169984 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.170038 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.170057 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.170083 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.170101 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.272994 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.273330 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.273340 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.273356 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.273365 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.376251 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.376298 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.376309 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.376325 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.376334 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.477582 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:45 crc kubenswrapper[4732]: E0131 09:01:45.477834 4732 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:45 crc kubenswrapper[4732]: E0131 09:01:45.477911 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs podName:3bd29a31-1a47-40da-afc5-6c4423067083 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:49.477885687 +0000 UTC m=+47.783761931 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs") pod "network-metrics-daemon-7fgvm" (UID: "3bd29a31-1a47-40da-afc5-6c4423067083") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.480203 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.480285 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.480303 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.480330 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.480349 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.508301 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:44:58.804525144 +0000 UTC Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.541746 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.541827 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.541783 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.541756 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:45 crc kubenswrapper[4732]: E0131 09:01:45.541935 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:45 crc kubenswrapper[4732]: E0131 09:01:45.542053 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:01:45 crc kubenswrapper[4732]: E0131 09:01:45.542154 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:45 crc kubenswrapper[4732]: E0131 09:01:45.542227 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.583153 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.583201 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.583234 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.583251 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.583286 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.686048 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.686078 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.686087 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.686102 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.686112 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.788640 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.788708 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.788724 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.788742 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.788754 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.891085 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.891125 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.891137 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.891153 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.891166 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.993450 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.993488 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.993498 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.993513 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:45 crc kubenswrapper[4732]: I0131 09:01:45.993522 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:45Z","lastTransitionTime":"2026-01-31T09:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.096628 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.096702 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.096715 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.096736 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.096748 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.204914 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.204958 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.204969 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.204987 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.205000 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.307800 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.307869 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.307883 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.307907 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.307921 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.411228 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.411298 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.411310 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.411336 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.411349 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.509373 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:48:00.323788286 +0000 UTC Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.515012 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.515076 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.515098 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.515127 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.515171 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.618430 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.618539 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.618556 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.618583 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.618600 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.720418 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.720467 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.720482 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.720499 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.720510 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.823921 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.824001 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.824023 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.824055 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.824082 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.927052 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.927105 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.927116 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.927136 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:46 crc kubenswrapper[4732]: I0131 09:01:46.927147 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:46Z","lastTransitionTime":"2026-01-31T09:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.030322 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.030397 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.030410 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.030428 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.030441 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.134006 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.134064 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.134085 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.134109 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.134125 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.238224 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.238298 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.238310 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.238334 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.238351 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.341949 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.341979 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.341988 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.342002 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.342011 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.445061 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.445127 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.445149 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.445179 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.445200 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
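The condition repeated above is mechanical: the runtime keeps reporting NetworkReady=false until at least one CNI network configuration appears under /etc/kubernetes/cni/net.d/. Below is a minimal sketch of such a readiness probe in Go; it is illustrative only, not the kubelet's actual code, and the accepted file suffixes (.conf, .conflist, .json) are an assumption based on common CNI config loaders.

package main

import (
	"fmt"
	"os"
	"strings"
)

// hasCNIConfig reports whether confDir contains any file a typical CNI
// loader would accept. An unreadable or missing directory counts as
// "no configuration", matching the log message above.
func hasCNIConfig(confDir string) bool {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".conf") ||
			strings.HasSuffix(name, ".conflist") ||
			strings.HasSuffix(name, ".json") {
			return true
		}
	}
	return false
}

func main() {
	if !hasCNIConfig("/etc/kubernetes/cni/net.d") {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
	}
}

Once the network operator drops a config file into that directory, a check like this flips to true and the Ready condition above would stop cycling.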
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.510541 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:37:00.768084678 +0000 UTC
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.542251 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.542299 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:01:47 crc kubenswrapper[4732]: E0131 09:01:47.542455 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.542486 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:01:47 crc kubenswrapper[4732]: E0131 09:01:47.542574 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.542257 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:01:47 crc kubenswrapper[4732]: E0131 09:01:47.542708 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:01:47 crc kubenswrapper[4732]: E0131 09:01:47.542792 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.548411 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.548489 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.548501 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.548567 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.548583 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.651931 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.651990 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.652005 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.652026 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.652044 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.755070 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.755128 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.755143 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.755161 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.755176 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.857130 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.857171 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.857182 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.857200 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.857212 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.960747 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.960822 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.960840 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.960867 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:47 crc kubenswrapper[4732]: I0131 09:01:47.960884 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:47Z","lastTransitionTime":"2026-01-31T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.064096 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.064140 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.064150 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.064165 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.064175 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.169138 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.169201 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.169219 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.169243 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.169261 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.272686 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.272743 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.272760 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.272784 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.272803 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.378167 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.378264 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.378279 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.378299 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.378308 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.480234 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.480277 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.480287 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.480304 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.480316 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.511022 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:50:05.388631879 +0000 UTC
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.584403 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.584444 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.584456 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.584479 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.584490 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.687251 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.687300 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.687311 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.687327 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.687338 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
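Note how the certificate_manager.go entries keep the expiration fixed (2026-02-24 05:53:03 UTC) while the logged rotation deadline changes on every pass (2026-01-17, 2025-11-20, 2026-01-15, ...): the certificate manager re-samples a jittered deadline inside the certificate's validity window so a fleet of kubelets does not rotate simultaneously. A rough sketch of one such scheme is below; the 70-90% window is an illustrative assumption, not a quoted kubelet constant, and the issue time is invented for the example.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a uniformly random point in the last portion of
// the certificate's lifetime; re-running it yields a different deadline
// each time, as seen in the log entries above.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // random fraction in [0.7, 0.9)
	return notBefore.Add(time.Duration(frac * float64(total)))
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
	notBefore := notAfter.Add(-90 * 24 * time.Hour) // assumed issue time
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter).UTC())
	}
}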
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.790418 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.790465 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.790473 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.790492 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.790506 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.893067 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.893122 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.893133 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.893156 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.893171 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.995702 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.995749 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.995765 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.995782 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:48 crc kubenswrapper[4732]: I0131 09:01:48.995792 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:48Z","lastTransitionTime":"2026-01-31T09:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.098854 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.098926 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.098965 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.098996 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.099016 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.201361 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.201404 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.201415 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.201431 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.201442 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.304786 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.305020 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.305032 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.305051 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.305100 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.408075 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.408118 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.408129 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.408146 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.408157 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.511111 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.511154 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.511165 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.511183 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.511196 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.511292 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:05:42.246368226 +0000 UTC
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.521231 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm"
Jan 31 09:01:49 crc kubenswrapper[4732]: E0131 09:01:49.521361 4732 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 09:01:49 crc kubenswrapper[4732]: E0131 09:01:49.521414 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs podName:3bd29a31-1a47-40da-afc5-6c4423067083 nodeName:}" failed. No retries permitted until 2026-01-31 09:01:57.521400263 +0000 UTC m=+55.827276467 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs") pod "network-metrics-daemon-7fgvm" (UID: "3bd29a31-1a47-40da-afc5-6c4423067083") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.542085 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.542085 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:01:49 crc kubenswrapper[4732]: E0131 09:01:49.542323 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.542109 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm"
Jan 31 09:01:49 crc kubenswrapper[4732]: E0131 09:01:49.542423 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083"
Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.542092 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:01:49 crc kubenswrapper[4732]: E0131 09:01:49.542235 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:01:49 crc kubenswrapper[4732]: E0131 09:01:49.542484 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
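The "durationBeforeRetry 8s" in the mount failure above is one step of an exponential backoff on the failing MountVolume operation: 8s is consistent with a few doublings from a sub-second base. The sketch below illustrates that pattern; the base, factor, and cap are illustrative assumptions, not kubelet's exact constants.

package main

import (
	"fmt"
	"time"
)

type backoff struct {
	base, max time.Duration
	attempts  int
}

// next returns how long retries are forbidden after the current failure,
// doubling each time until the cap is reached.
func (b *backoff) next() time.Duration {
	d := b.base << b.attempts // base * 2^attempts
	if d > b.max {
		d = b.max
	}
	b.attempts++
	return d
}

func main() {
	b := backoff{base: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 0; i < 6; i++ {
		// the fifth failure yields 8s, matching the log entry above
		fmt.Printf("attempt %d: no retries permitted for %v\n", i+1, b.next())
	}
}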
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.613833 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.613930 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.613946 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.613997 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.614012 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.718645 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.718750 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.718767 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.718792 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.718811 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.822161 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.822200 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.822211 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.822228 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.822240 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.926365 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.926426 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.926445 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.926476 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:49 crc kubenswrapper[4732]: I0131 09:01:49.926496 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:49Z","lastTransitionTime":"2026-01-31T09:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.029116 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.029217 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.029233 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.029250 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.029285 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.132262 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.132303 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.132316 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.132334 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.132350 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.236137 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.236191 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.236202 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.236220 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.236231 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.339154 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.339206 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.339217 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.339238 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.339253 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.444126 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.444237 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.444257 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.445474 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.445550 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.483579 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.483650 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.483694 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.483715 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.483727 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 31 09:01:50 crc kubenswrapper[4732]: E0131 09:01:50.496412 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.501043 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.501374 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
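
The patch failure above, and every retry that follows, is a TLS problem rather than a kubelet bug: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-31. A minimal Go sketch for confirming the validity window from the node itself, assuming the endpoint 127.0.0.1:9743 quoted in the failed Post is reachable from where this runs; InsecureSkipVerify is set only so the expired certificate can be fetched for inspection, not to trust it:

// webhook_cert_check.go: diagnostic sketch, not part of the kubelet.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the failed Post in the error above (an assumption
	// that it is reachable from where this runs).
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // fetch the certificate even though verification fails
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// Leaf serving certificate presented during the handshake.
	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Println("NotBefore:", cert.NotBefore.UTC())
	fmt.Println("NotAfter: ", cert.NotAfter.UTC())
	if now.After(cert.NotAfter) {
		fmt.Println("expired for:", now.Sub(cert.NotAfter))
	}
}
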
event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.501689 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.501823 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.501985 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.511405 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:44:55.737386301 +0000 UTC Jan 31 09:01:50 crc kubenswrapper[4732]: E0131 09:01:50.515013 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.519026 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.519058 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
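
While the patch keeps failing, the conditions the kubelet is trying to write can still be read back from the API server, since admission webhooks intercept writes, not reads. A read-only client-go sketch; the kubeconfig path is a placeholder, and the node name "crc" comes from the log above:

// node_conditions.go: read-only sketch using client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point this at an admin kubeconfig for the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Same condition set (MemoryPressure, DiskPressure, PIDPressure, Ready)
	// that the kubelet is attempting to patch in the entries above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %-28s %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
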
event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.519069 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.519085 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.519094 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: E0131 09:01:50.531745 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.535429 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.535461 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.535468 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.535481 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.535490 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: E0131 09:01:50.547969 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.551253 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.551287 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.551296 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.551311 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.551323 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: E0131 09:01:50.563827 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:50Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:50 crc kubenswrapper[4732]: E0131 09:01:50.563952 4732 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.565730 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.565800 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.565812 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.565848 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.565861 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.668182 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.668246 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.668268 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.668298 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.668339 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.772137 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.772219 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.772231 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.772252 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.772264 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.876135 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.876191 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.876206 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.876228 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.876243 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.978932 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.978990 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.979004 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.979020 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:50 crc kubenswrapper[4732]: I0131 09:01:50.979050 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:50Z","lastTransitionTime":"2026-01-31T09:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.082534 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.082611 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.082632 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.082697 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.082739 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.185398 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.185462 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.185480 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.185506 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.185525 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.288324 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.288470 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.288486 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.288504 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.288516 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.392027 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.392094 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.392103 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.392123 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.392134 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.496039 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.496100 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.496117 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.496138 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.496155 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.511628 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:55:50.110015678 +0000 UTC Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.542302 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.542384 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:51 crc kubenswrapper[4732]: E0131 09:01:51.542470 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.542496 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.542504 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:51 crc kubenswrapper[4732]: E0131 09:01:51.542627 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:51 crc kubenswrapper[4732]: E0131 09:01:51.542767 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:51 crc kubenswrapper[4732]: E0131 09:01:51.543184 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.598521 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.598573 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.598585 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.598607 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.598622 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.701193 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.701266 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.701277 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.701296 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.701308 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.804380 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.804440 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.804456 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.804480 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.804497 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.907214 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.907259 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.907287 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.907307 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.907317 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:51Z","lastTransitionTime":"2026-01-31T09:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.920926 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.934492 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:51Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.965134 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:51Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:51 crc kubenswrapper[4732]: I0131 09:01:51.983695 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:51Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.002353 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:51Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.010509 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.010572 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.010587 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.010607 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.010621 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.017923 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.033723 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.048512 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.061431 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.073891 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.096188 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"1 09:01:42.390171 6216 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390285 6216 reflector.go:311] Stopping 
reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390357 6216 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390398 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390856 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:42.390942 6216 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:42.391006 6216 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:42.391068 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 09:01:42.391132 6216 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 09:01:42.391188 6216 factory.go:656] Stopping watch factory\\\\nI0131 09:01:42.391259 6216 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:42.390987 6216 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:42.391113 6216 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:42.391233 6216 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.114325 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.114707 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.114734 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.114745 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.114763 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.114774 4732 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.127014 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
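
The bodies being PATCHed in these records are strategic merge patches, not whole PodStatus objects: keyed lists such as status.conditions merge element-by-element on their "type" key, and the $setElementOrder/conditions directive pins the order the merged list should take. A rough sketch of producing such a patch with the apimachinery helpers follows; the pod and condition values are invented, and this shows the general mechanism rather than the kubelet status manager's exact code path.

package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// The API server's current view of the pod status (invented values).
	oldPod := corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		{Type: corev1.ContainersReady, Status: corev1.ConditionTrue},
	}}}
	// The kubelet's updated local view: both conditions flipped to False.
	newPod := corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse,
			Reason: "ContainersNotReady", LastTransitionTime: metav1.Now()},
		{Type: corev1.ContainersReady, Status: corev1.ConditionFalse,
			Reason: "ContainersNotReady", LastTransitionTime: metav1.Now()},
	}}}
	oldJSON, _ := json.Marshal(oldPod)
	newJSON, _ := json.Marshal(newPod)
	// corev1.Pod{} supplies the patch metadata (merge key "type" for conditions).
	patch, err := strategicpatch.CreateTwoWayMergePatch(oldJSON, newJSON, corev1.Pod{})
	if err != nil {
		log.Fatal(err)
	}
	// Prints a condition-level patch shaped like the ones quoted in the log.
	fmt.Println(string(patch))
}
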
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.137380 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.149640 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.158379 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.172579 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.183807 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.217736 4732 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.217780 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.217790 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.217805 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.217814 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.320584 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.320626 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.320635 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.320650 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.320659 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.423380 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.423425 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.423434 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.423455 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.423472 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.512572 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:46:07.409834742 +0000 UTC Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.526477 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.526526 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.526543 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.526568 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.526585 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.574305 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d
93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.593363 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.611743 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.626694 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.629730 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.629847 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.629892 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.629920 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.629936 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.637028 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.640428 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.647099 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.657257 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.670467 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.685380 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.705444 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"1 09:01:42.390171 6216 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390285 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390357 6216 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390398 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390856 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:42.390942 6216 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:42.391006 6216 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:42.391068 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 09:01:42.391132 6216 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0131 09:01:42.391188 6216 factory.go:656] Stopping watch factory\\\\nI0131 09:01:42.391259 6216 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:42.390987 6216 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:42.391113 6216 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:42.391233 6216 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.720435 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.732653 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.732736 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.732880 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.732938 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.732961 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.735294 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.750655 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.765397 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.775580 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.788410 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.798535 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.811012 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.823420 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.834159 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.835791 4732 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.835858 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.835872 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.835895 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.835909 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.845820 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.856574 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.877306 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.888696 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.900803 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.912113 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.924193 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.935157 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.938784 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.938848 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.938862 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.938878 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.938888 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:52Z","lastTransitionTime":"2026-01-31T09:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.944774 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.955460 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.975060 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b67c56c55214b7fc95fc05a876435bf7a9d6e5de9dfc0a3acc0110e6779b818\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:39Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 09:01:39.556996 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:39.557035 6044 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 09:01:39.557080 6044 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 09:01:39.557086 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 09:01:39.557099 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:39.557106 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:39.557121 6044 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 09:01:39.557163 6044 factory.go:656] Stopping watch factory\\\\nI0131 09:01:39.557193 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 09:01:39.557202 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:39.557208 6044 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 09:01:39.557213 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 09:01:39.557219 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 09:01:39.557225 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 09:01:39.557232 6044 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"1 09:01:42.390171 6216 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390285 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390357 6216 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390398 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390856 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:42.390942 6216 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:42.391006 6216 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:42.391068 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 09:01:42.391132 6216 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0131 09:01:42.391188 6216 factory.go:656] Stopping watch factory\\\\nI0131 09:01:42.391259 6216 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:42.390987 6216 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:42.391113 6216 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:42.391233 6216 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.988419 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:52 crc kubenswrapper[4732]: I0131 09:01:52.999137 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:52Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.009919 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:53Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.022555 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:53Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.038539 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:53Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.041361 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc 
kubenswrapper[4732]: I0131 09:01:53.041430 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.041452 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.041481 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.041501 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.144343 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.144410 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.144421 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.144441 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.144462 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.248151 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.248239 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.248274 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.248304 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.248326 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.350880 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.350988 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.351713 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.351798 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.351813 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.454641 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.454715 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.454727 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.454746 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.454759 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.513348 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 13:05:17.022091752 +0000 UTC Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.541742 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.541792 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.541809 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.541747 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:53 crc kubenswrapper[4732]: E0131 09:01:53.541918 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:53 crc kubenswrapper[4732]: E0131 09:01:53.542023 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:53 crc kubenswrapper[4732]: E0131 09:01:53.542217 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:01:53 crc kubenswrapper[4732]: E0131 09:01:53.542374 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.557525 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.557570 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.557581 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.557601 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.557613 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.660962 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.661012 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.661024 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.661043 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.661058 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.765251 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.765301 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.765311 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.765328 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.765340 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.868566 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.868633 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.868648 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.868690 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.868707 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.972077 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.972126 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.972138 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.972160 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:53 crc kubenswrapper[4732]: I0131 09:01:53.972173 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:53Z","lastTransitionTime":"2026-01-31T09:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.074553 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.074588 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.074597 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.074613 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.074625 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.177298 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.177329 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.177337 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.177354 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.177363 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.280267 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.280299 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.280309 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.280323 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.280332 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.382867 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.382914 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.382930 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.382949 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.382964 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.485959 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.486009 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.486020 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.486038 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.486050 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.514319 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:37:45.134939385 +0000 UTC Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.588947 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.588992 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.589005 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.589024 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.589035 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.691733 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.691772 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.691782 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.691799 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.691812 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.794158 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.794204 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.794215 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.794235 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.794248 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.896508 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.896564 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.896581 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.896604 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.896621 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.999598 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.999648 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.999679 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.999698 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:54 crc kubenswrapper[4732]: I0131 09:01:54.999709 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:54Z","lastTransitionTime":"2026-01-31T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.103310 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.103373 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.103388 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.103412 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.103427 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.206139 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.206179 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.206188 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.206203 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.206213 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.308627 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.308682 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.308695 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.308711 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.308722 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.385208 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.385347 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.385400 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:02:27.385363088 +0000 UTC m=+85.691239292 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.385439 4732 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.385506 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.385512 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:27.385492713 +0000 UTC m=+85.691368997 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.385644 4732 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.385707 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:27.38569601 +0000 UTC m=+85.691572214 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.410871 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.410944 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.410970 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.410998 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.411016 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.487055 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.487132 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.487353 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.487376 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.487391 4732 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.487468 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:27.487446551 +0000 UTC m=+85.793322755 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.487486 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.487544 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.487567 4732 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.487708 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:27.487637848 +0000 UTC m=+85.793514092 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.514380 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.514419 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.514432 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.514449 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.514459 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.514477 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:47:52.671812301 +0000 UTC Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.541972 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.542097 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.542192 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.542252 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.542207 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.542379 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.542523 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:55 crc kubenswrapper[4732]: E0131 09:01:55.542626 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.543692 4732 scope.go:117] "RemoveContainer" containerID="9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.559128 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d
431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.571815 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.588309 4732 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.602903 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.615379 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.619354 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.619414 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.619425 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.619439 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.619448 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.629710 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.644260 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.663835 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.677398 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.691071 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.704204 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.717984 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.721691 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.721727 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.721739 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.721755 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.721766 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.731212 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.743294 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.762721 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f
423be0f5ae070c7c1bb8263a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"1 09:01:42.390171 6216 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390285 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390357 6216 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390398 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390856 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:42.390942 6216 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:42.391006 6216 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:42.391068 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 09:01:42.391132 6216 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 09:01:42.391188 6216 factory.go:656] Stopping watch factory\\\\nI0131 09:01:42.391259 6216 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:42.390987 6216 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:42.391113 6216 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:42.391233 6216 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.780008 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.791196 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.800391 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.823652 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.823702 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.823712 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.823728 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.823737 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.893827 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/1.log" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.896638 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.896816 4732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.908860 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.918413 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.925647 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.925684 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.925693 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.925708 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 
09:01:55.925717 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:55Z","lastTransitionTime":"2026-01-31T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.937144 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f4
2928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.954100 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.969426 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:55 crc kubenswrapper[4732]: I0131 09:01:55.984062 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.000846 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:55Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.018773 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.027880 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.027941 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.027957 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.027984 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.028001 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.035996 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.055295 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.081394 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"1 09:01:42.390171 6216 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390285 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390357 6216 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390398 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390856 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:42.390942 6216 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:42.391006 6216 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:42.391068 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 09:01:42.391132 6216 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 09:01:42.391188 6216 factory.go:656] Stopping watch factory\\\\nI0131 09:01:42.391259 6216 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:42.390987 6216 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:42.391113 6216 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:42.391233 6216 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.099123 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.111261 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.123616 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.133736 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.133784 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.133799 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.133818 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.133831 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.138508 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:
01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.148682 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.161161 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.171410 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.236684 4732 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.236820 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.236831 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.236848 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.236860 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.339368 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.339403 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.339413 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.339428 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.339439 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.442332 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.442374 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.442384 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.442404 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.442416 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.514706 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:45:12.625613424 +0000 UTC Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.545343 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.545406 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.545428 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.545459 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.545480 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.648501 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.648546 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.648554 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.648571 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.648580 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.751751 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.751820 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.751848 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.751876 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.751893 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.854995 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.855077 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.855096 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.855140 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.855180 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.903094 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/2.log" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.903822 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/1.log" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.907392 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16" exitCode=1 Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.907456 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.907514 4732 scope.go:117] "RemoveContainer" containerID="9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.908897 4732 scope.go:117] "RemoveContainer" containerID="da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16" Jan 31 09:01:56 crc kubenswrapper[4732]: E0131 09:01:56.909251 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.939535 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.957007 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.964630 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.964705 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.964721 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.964744 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.964760 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:56Z","lastTransitionTime":"2026-01-31T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.976866 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:56 crc kubenswrapper[4732]: I0131 09:01:56.993047 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:56Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.007251 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.018528 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.029493 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.042320 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.060759 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.067343 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.067398 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.067413 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.067431 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.067444 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.080560 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.096273 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.109595 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 
09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.124185 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.135763 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.145497 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.164716 4732 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9455be46c494ee301fbd246b34b851413405354f423be0f5ae070c7c1bb8263a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"message\\\":\\\"1 09:01:42.390171 6216 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390285 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390357 6216 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390398 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 09:01:42.390856 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 09:01:42.390942 6216 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 09:01:42.391006 6216 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 09:01:42.391068 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 09:01:42.391132 6216 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 09:01:42.391188 6216 factory.go:656] Stopping watch factory\\\\nI0131 09:01:42.391259 6216 ovnkube.go:599] Stopped ovnkube\\\\nI0131 09:01:42.390987 6216 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 09:01:42.391113 6216 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 09:01:42.391233 6216 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:56Z\\\",\\\"message\\\":\\\"31T09:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:01:56.451406 6413 services_controller.go:451] 
Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:01:56.451418 6413 services_controller.go:452] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451426 6413 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451433 6413 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451434 6413 ser\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54
de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.170621 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.170700 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.170711 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.170728 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:57 crc kubenswrapper[4732]: 
I0131 09:01:57.170739 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.189389 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.201845 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:57Z is after 2025-08-24T17:21:41Z"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.274645 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.274776 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.274798 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.274825 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.274844 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.376988 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.377307 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.377461 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.377594 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.377817 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.480751 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.481730 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.481767 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.481784 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.481795 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.515497 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:52:15.579391554 +0000 UTC
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.541842 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.541892 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm"
Jan 31 09:01:57 crc kubenswrapper[4732]: E0131 09:01:57.542004 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.541862 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.542093 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:01:57 crc kubenswrapper[4732]: E0131 09:01:57.542213 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083"
Jan 31 09:01:57 crc kubenswrapper[4732]: E0131 09:01:57.542331 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:01:57 crc kubenswrapper[4732]: E0131 09:01:57.542466 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.584328 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.584379 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.584391 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.584409 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.584422 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.610258 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm"
Jan 31 09:01:57 crc kubenswrapper[4732]: E0131 09:01:57.610474 4732 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 09:01:57 crc kubenswrapper[4732]: E0131 09:01:57.610648 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs podName:3bd29a31-1a47-40da-afc5-6c4423067083 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:13.61061626 +0000 UTC m=+71.916492504 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs") pod "network-metrics-daemon-7fgvm" (UID: "3bd29a31-1a47-40da-afc5-6c4423067083") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.687771 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.687810 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.687821 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.687839 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.687850 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.790268 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.790701 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.790924 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.791082 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.791207 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.894613 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.894659 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.894712 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.894738 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.894754 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.912861 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/2.log"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.998003 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.998056 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.998068 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.998088 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:01:57 crc kubenswrapper[4732]: I0131 09:01:57.998101 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:57Z","lastTransitionTime":"2026-01-31T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.101276 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.101328 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.101340 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.101359 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.101373 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.204563 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.204631 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.204648 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.204720 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.204739 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.308604 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.308937 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.308956 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.308979 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.308997 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.412889 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.412960 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.412977 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.413007 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.413034 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.489086 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.490823 4732 scope.go:117] "RemoveContainer" containerID="da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16" Jan 31 09:01:58 crc kubenswrapper[4732]: E0131 09:01:58.491149 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.509942 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.515758 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:38:37.735384677 +0000 UTC Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.516087 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 
09:01:58.516123 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.516138 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.516157 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.516170 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.530626 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster
-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.552488 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.567977 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.594885 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.612374 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.618977 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.619020 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.619029 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.619046 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.619058 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.627424 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.640978 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.650621 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.660456 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.678816 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:56Z\\\",\\\"message\\\":\\\"31T09:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:01:56.451406 6413 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:01:56.451418 6413 services_controller.go:452] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451426 6413 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451433 6413 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451434 6413 ser\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.693487 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.705420 4732 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.720462 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.721562 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.721622 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.721633 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.721649 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.721685 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.735354 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:
01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.747014 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.759358 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.774321 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:01:58Z is after 2025-08-24T17:21:41Z" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.824744 4732 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.824792 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.824804 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.824824 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.824835 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.926555 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.926621 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.926639 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.926686 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:58 crc kubenswrapper[4732]: I0131 09:01:58.926709 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:58Z","lastTransitionTime":"2026-01-31T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.030143 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.030181 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.030214 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.030234 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.030248 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.132963 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.133023 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.133040 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.133067 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.133088 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.236259 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.236323 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.236333 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.236367 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.236380 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.339407 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.339447 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.339461 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.339478 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.339489 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.441553 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.441607 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.441617 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.441633 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.441643 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.516414 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:58:02.548775864 +0000 UTC Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.541725 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.541765 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.541738 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.541839 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:01:59 crc kubenswrapper[4732]: E0131 09:01:59.541969 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:01:59 crc kubenswrapper[4732]: E0131 09:01:59.542028 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:01:59 crc kubenswrapper[4732]: E0131 09:01:59.542163 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:01:59 crc kubenswrapper[4732]: E0131 09:01:59.542224 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.543583 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.543614 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.543624 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.543637 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.543647 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.646824 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.646885 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.646897 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.646918 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.646934 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.749489 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.749551 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.749580 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.749596 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.749605 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.853725 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.853786 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.853804 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.853829 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.853847 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.956843 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.956912 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.956931 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.956959 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:01:59 crc kubenswrapper[4732]: I0131 09:01:59.956979 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:01:59Z","lastTransitionTime":"2026-01-31T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.059926 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.060054 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.060124 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.060154 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.060203 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.163878 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.164801 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.164874 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.164899 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.164919 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.268118 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.268574 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.268816 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.269068 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.269240 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.372546 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.372600 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.372613 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.372631 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.372644 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.475760 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.475807 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.475822 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.475848 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.475861 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.517735 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:11:38.944197053 +0000 UTC Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.578914 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.579307 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.579430 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.579574 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.579715 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.683201 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.683259 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.683276 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.683299 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.683317 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.786770 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.786826 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.786843 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.786871 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.786904 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.826335 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.826403 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.826418 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.826443 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.826455 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: E0131 09:02:00.846741 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:00Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.852024 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.852096 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
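[annotation] The E0131 09:02:00.846741 entry above shows why the node status patch keeps failing: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-31. A small diagnostic sketch (not part of the kubelet) that reproduces the x509 verdict: dial the endpoint with chain verification disabled so the handshake completes, then compare the leaf certificate's validity window against the local clock.

```go
// certprobe.go - diagnostic sketch for the webhook TLS failure: inspect
// the certificate actually presented at the endpoint named in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint from the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s\n", leaf.NotBefore, leaf.NotAfter)
	if now.After(leaf.NotAfter) {
		// Matches the log: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Println("x509: certificate has expired")
	}
}
```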
event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.852117 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.852146 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.852167 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: E0131 09:02:00.870160 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:00Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.876125 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.876193 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
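[annotation] The same patch then fails again at 09:02:00.870160 and 09:02:00.894698. "will retry" reflects the kubelet's bounded retry loop around the node status update; the retry count of 5 below mirrors the kubelet's nodeStatusUpdateRetry constant as I recall it, so treat the exact number as an assumption.

```go
// retry.go - sketch of the bounded retry loop behind the repeated
// "Error updating node status, will retry" entries.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed value of the kubelet constant

func patchNodeStatus() error {
	// Stand-in for the PATCH that the webhook rejects in the log.
	return errors.New("failed calling webhook: tls: certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeded retry count")
}

func main() {
	fmt.Println(updateNodeStatus())
}
```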
event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.876217 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.876248 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.876273 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: E0131 09:02:00.894698 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:00Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.899758 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.899808 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
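[annotation] The err string in each failed attempt embeds the strategic-merge patch body as a Go-quoted string (hence the \\\" escaping); unescaping it recovers a readable JSON patch with $setElementOrder/conditions, allocatable/capacity, the images list, and nodeInfo. A sketch of that unescape-and-pretty-print step, using a tiny stand-in payload rather than the full blob (depending on how the log was captured, the string may be quoted more than once and need a second Unquote):

```go
// patchdump.go - sketch for inspecting the patch embedded in the error:
// strconv.Unquote removes the Go-style escaping, json.Indent makes the
// recovered patch readable.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Tiny stand-in for the escaped payload in the log (illustrative only).
	escaped := `"{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Ready\"}]}}"`
	raw, err := strconv.Unquote(escaped)
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}
```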
event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.899824 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.899843 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.900185 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: E0131 09:02:00.917520 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:00Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.922943 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.922997 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.923020 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.923049 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.923074 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:00 crc kubenswrapper[4732]: E0131 09:02:00.950428 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:00Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:00 crc kubenswrapper[4732]: E0131 09:02:00.950719 4732 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.953012 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.953085 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.953110 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.953144 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:00 crc kubenswrapper[4732]: I0131 09:02:00.953168 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:00Z","lastTransitionTime":"2026-01-31T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.056078 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.056113 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.056120 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.056136 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.056147 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.158965 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.159024 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.159034 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.159051 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.159061 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.261163 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.261227 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.261235 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.261249 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.261259 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.363962 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.364026 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.364047 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.364076 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.364095 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.466530 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.466573 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.466583 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.466602 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.466615 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.518453 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:29:35.63470091 +0000 UTC Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.541841 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.541841 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:01 crc kubenswrapper[4732]: E0131 09:02:01.541978 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.542089 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:01 crc kubenswrapper[4732]: E0131 09:02:01.542176 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.541861 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:01 crc kubenswrapper[4732]: E0131 09:02:01.542271 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:01 crc kubenswrapper[4732]: E0131 09:02:01.542339 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.569354 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.569388 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.569397 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.569413 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.569423 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.672211 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.672262 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.672275 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.672296 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.672336 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.775078 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.775133 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.775146 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.775167 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.775181 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.878019 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.878073 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.878087 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.878103 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.878113 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.980278 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.980606 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.980615 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.980633 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:01 crc kubenswrapper[4732]: I0131 09:02:01.980644 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:01Z","lastTransitionTime":"2026-01-31T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.083936 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.083997 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.084016 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.084037 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.084048 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.186963 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.187061 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.187080 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.187106 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.187123 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.290370 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.291146 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.291268 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.291351 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.291432 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.393554 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.393593 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.393604 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.393621 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.393633 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.496826 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.496879 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.496892 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.496910 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.496923 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.519653 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:21:44.064431942 +0000 UTC Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.554916 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.565764 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.576598 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.594196 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.600152 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.600192 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.600200 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.600219 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.600229 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.607198 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.619493 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.630889 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 
09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.642573 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.654301 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.665742 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.688309 4732 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:56Z\\\",\\\"message\\\":\\\"31T09:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:01:56.451406 6413 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:01:56.451418 6413 services_controller.go:452] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451426 6413 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451433 6413 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451434 6413 ser\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.701042 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.707231 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.707357 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.707382 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.707408 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.707421 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.719547 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.734466 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.749598 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.763378 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.780160 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.790399 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:02Z is after 2025-08-24T17:21:41Z"
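Every failed status patch above shares one root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node's clock (2026-01-31T09:02:02Z). A minimal Go sketch for confirming such a validity-window failure from the node follows; the address comes from the log, while the insecure inspection dial is purely illustrative and is not how the kubelet talks to the webhook.

package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// Address taken from the log's webhook URL https://127.0.0.1:9743/pod.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip chain verification so the handshake succeeds and the
		// certificate can be inspected; for diagnosis only.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()
	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("notBefore=%s notAfter=%s\n", leaf.NotBefore, leaf.NotAfter)
	if now := time.Now(); now.After(leaf.NotAfter) {
		// Same determination the kubelet logs as "x509: certificate has
		// expired or is not yet valid: current time ... is after ...".
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	}
}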
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.808853 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.808894 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.808906 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.808921 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.808932 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.912025 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.912078 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.912091 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.912111 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:02 crc kubenswrapper[4732]: I0131 09:02:02.912124 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:02Z","lastTransitionTime":"2026-01-31T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.016070 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.016126 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.016139 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.016160 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.016172 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.119048 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.119333 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.119403 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.119845 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.119932 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.222597 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.222635 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.222646 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.222661 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.222683 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.326032 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.327109 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.327207 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.327306 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.327376 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.429914 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.429961 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.429970 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.429989 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.430000 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.520071 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:09:32.861452927 +0000 UTC
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.533547 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.533601 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.533613 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.533641 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.533692 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
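The certificate_manager.go lines in this stretch print a different rotation deadline on each attempt (2026-01-11 above, then 2025-11-11, 2025-11-22, and 2025-12-24 below) because the deadline is re-derived with random jitter inside the certificate's validity window, and a deadline already in the past makes the kubelet try to rotate immediately. A simplified sketch of that kind of jittered-deadline computation follows; the 70 to 90 percent window and the assumed one-year lifetime are assumptions modeled on upstream client-go defaults, not values stated in this log.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point late in the certificate's
// lifetime. The 0.7+0.2*rand fraction mimics the jitter client-go's
// certificate manager applies; the exact window is an assumption here.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	return notBefore.Add(time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64())))
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	// The log never shows the issuance time; assume a one-year certificate.
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		// Each call lands somewhere different, as in the log lines above.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}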
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.542351 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.542402 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.542385 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.542359 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 09:02:03 crc kubenswrapper[4732]: E0131 09:02:03.542553 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083"
Jan 31 09:02:03 crc kubenswrapper[4732]: E0131 09:02:03.542701 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 09:02:03 crc kubenswrapper[4732]: E0131 09:02:03.542787 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 09:02:03 crc kubenswrapper[4732]: E0131 09:02:03.542840 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.636002 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.636034 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.636045 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.636063 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.636075 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
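The NodeNotReady churn that fills the rest of this log has a single cause, repeated verbatim in every setters.go entry: nothing has written a CNI configuration file under /etc/kubernetes/cni/net.d/, so the container runtime keeps reporting NetworkReady=false, the kubelet keeps the node not ready, and it skips syncing the four pods listed above. A rough Go sketch of such a directory check follows; the real runtime loads configs through libcni and validates their contents, which this sketch does not attempt.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether the CNI conf directory contains at least
// one candidate configuration file, in the spirit of the check behind
// "no CNI configuration file in /etc/kubernetes/cni/net.d/". The real
// runtime (libcni) also parses and validates the files.
func networkReady(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := networkReady("/etc/kubernetes/cni/net.d")
	fmt.Println("NetworkReady:", ready, "err:", err)
}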
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.738892 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.738951 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.738967 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.738993 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.739012 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.841404 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.841480 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.841508 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.841537 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.841555 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.943904 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.943968 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.943979 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.943999 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:03 crc kubenswrapper[4732]: I0131 09:02:03.944011 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:03Z","lastTransitionTime":"2026-01-31T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.047005 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.047421 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.047657 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.047944 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.048300 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.152143 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.152196 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.152205 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.152222 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.152233 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.255720 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.255790 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.255812 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.255839 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.255857 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.358520 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.358608 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.358631 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.358708 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.358744 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.461989 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.462056 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.462081 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.462109 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.462131 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.521381 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:13:41.288329444 +0000 UTC Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.565157 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.565229 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.565249 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.565278 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.565302 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.668449 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.668497 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.668507 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.668522 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.668532 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.772450 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.772505 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.772520 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.772542 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.772556 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.875608 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.875946 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.876053 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.876155 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.876252 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.979562 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.979611 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.979626 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.979647 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:04 crc kubenswrapper[4732]: I0131 09:02:04.979713 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:04Z","lastTransitionTime":"2026-01-31T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.082508 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.082556 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.082571 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.082589 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.082600 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.185169 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.185212 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.185227 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.185244 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.185257 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.288054 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.288120 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.288137 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.288162 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.288178 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.391930 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.391965 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.391979 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.391997 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.392008 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.495214 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.495254 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.495267 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.495284 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.495293 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.521607 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:19:56.918615212 +0000 UTC Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.542013 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:05 crc kubenswrapper[4732]: E0131 09:02:05.542159 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.542528 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:05 crc kubenswrapper[4732]: E0131 09:02:05.542576 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.542609 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:05 crc kubenswrapper[4732]: E0131 09:02:05.542647 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.542707 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:05 crc kubenswrapper[4732]: E0131 09:02:05.542761 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.598259 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.598288 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.598297 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.598312 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.598322 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.700544 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.700572 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.700581 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.700595 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.700606 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.803388 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.803424 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.803434 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.803451 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.803461 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.906183 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.906219 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.906228 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.906245 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:05 crc kubenswrapper[4732]: I0131 09:02:05.906256 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:05Z","lastTransitionTime":"2026-01-31T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.009166 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.009202 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.009211 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.009228 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.009237 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.112699 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.112776 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.112791 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.112811 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.112826 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.216175 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.216250 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.216266 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.216292 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.216310 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.319574 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.319654 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.319717 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.319757 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.319779 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.423405 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.423468 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.423486 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.423512 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.423529 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.522806 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:07:28.247750609 +0000 UTC Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.526090 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.526170 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.526220 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.526247 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.526259 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.629273 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.629350 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.629363 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.629382 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.629399 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.732312 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.732362 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.732373 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.732390 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.732407 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.835969 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.836026 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.836046 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.836069 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.836081 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.938976 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.939020 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.939037 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.939054 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:06 crc kubenswrapper[4732]: I0131 09:02:06.939069 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:06Z","lastTransitionTime":"2026-01-31T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.042354 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.042397 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.042407 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.042426 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.042440 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.145874 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.145954 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.145973 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.146003 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.146027 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.249026 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.249073 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.249084 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.249102 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.249113 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.352299 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.352350 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.352358 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.352376 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.352387 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.456102 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.456158 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.456174 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.456193 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.456204 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.523983 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:59:07.340286867 +0000 UTC Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.541695 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.541703 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.541812 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:07 crc kubenswrapper[4732]: E0131 09:02:07.541843 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:07 crc kubenswrapper[4732]: E0131 09:02:07.541895 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.541725 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:07 crc kubenswrapper[4732]: E0131 09:02:07.542089 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:07 crc kubenswrapper[4732]: E0131 09:02:07.542264 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.563448 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.563563 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.563576 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.563596 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.563612 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.666072 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.666112 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.666121 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.666136 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.666146 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.768981 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.769027 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.769040 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.769060 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.769074 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.871352 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.871395 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.871405 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.871422 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.871431 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.974317 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.974361 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.974369 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.974385 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:07 crc kubenswrapper[4732]: I0131 09:02:07.974394 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:07Z","lastTransitionTime":"2026-01-31T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.076763 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.076802 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.076811 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.076826 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.076836 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.178823 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.178855 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.178865 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.178880 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.178890 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.282023 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.282050 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.282060 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.282076 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.282084 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.384802 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.384838 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.384852 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.384869 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.384880 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.486890 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.486926 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.486937 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.486956 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.486968 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.524915 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:55:02.645374706 +0000 UTC Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.590081 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.590118 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.590127 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.590144 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.590156 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.693155 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.693202 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.693212 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.693229 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.693239 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.796054 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.796110 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.796121 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.796141 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.796152 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.898038 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.898087 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.898097 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.898114 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:08 crc kubenswrapper[4732]: I0131 09:02:08.898124 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:08Z","lastTransitionTime":"2026-01-31T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.000728 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.000777 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.000790 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.000809 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.000821 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.103417 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.103478 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.103491 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.103513 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.103529 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.205985 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.206037 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.206055 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.206076 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.206087 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.307813 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.307875 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.307895 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.307921 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.307938 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.410734 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.410778 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.410789 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.410806 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.410821 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.513578 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.513619 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.513629 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.513647 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.513656 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.525022 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:25:12.216525701 +0000 UTC Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.542680 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.542716 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.542683 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.542683 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:09 crc kubenswrapper[4732]: E0131 09:02:09.542829 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:09 crc kubenswrapper[4732]: E0131 09:02:09.542931 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:09 crc kubenswrapper[4732]: E0131 09:02:09.543027 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:09 crc kubenswrapper[4732]: E0131 09:02:09.543105 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.616068 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.616110 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.616119 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.616135 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.616152 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.718966 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.719008 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.719019 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.719035 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.719044 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.821615 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.821676 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.821687 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.821706 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.821721 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.924180 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.924238 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.924250 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.924268 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:09 crc kubenswrapper[4732]: I0131 09:02:09.924277 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:09Z","lastTransitionTime":"2026-01-31T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.029968 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.030019 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.030030 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.030048 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.030061 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.134055 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.134134 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.134151 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.134172 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.134187 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.236836 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.236898 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.236920 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.236950 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.236972 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.340446 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.340506 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.340520 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.340540 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.340555 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.443904 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.443965 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.443979 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.444001 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.444016 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.525288 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:02:50.110647002 +0000 UTC Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.546676 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.546750 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.546764 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.546783 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.546799 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.554376 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.649097 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.649143 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.649155 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.649180 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.649205 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.751409 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.751446 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.751455 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.751471 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.751482 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.854372 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.854435 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.854454 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.854472 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.854483 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.957279 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.957318 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.957329 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.957346 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:10 crc kubenswrapper[4732]: I0131 09:02:10.957358 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:10Z","lastTransitionTime":"2026-01-31T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.060128 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.060155 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.060164 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.060178 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.060187 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.162364 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.162407 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.162421 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.162438 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.162450 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.264541 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.264596 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.264609 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.264628 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.264640 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.292258 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.292529 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.292545 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.292565 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.292578 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.304850 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.309419 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.309584 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.309672 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.309761 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.309860 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.323099 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.327178 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.327220 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
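Every status-patch retry above fails for the same reason: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-31. A minimal Go sketch for confirming that from the node itself, assuming the endpoint is reachable locally; the address and port come from the log, everything else (file name, output format) is illustrative:

// certcheck.go - print the validity window of the certificate served by
// the webhook endpoint named in the log above. This is a diagnostic
// sketch, not part of any cluster tooling.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the "failed calling webhook" error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the expired certificate, do not trust it
	})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject: %s\n", cert.Subject)
		fmt.Printf("  not before: %s\n", cert.NotBefore.Format(time.RFC3339))
		fmt.Printf("  not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
		if time.Now().After(cert.NotAfter) {
			fmt.Println("  -> expired; matches the x509 error in the log")
		}
	}
}

The same information is available with openssl s_client -connect 127.0.0.1:9743 if dropping a Go binary onto the node is impractical.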
event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.327230 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.327248 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.327261 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.342120 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.346523 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.346553 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
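The Ready=False condition itself has a second, independent cause: the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/, so it keeps reporting NetworkReady=false until the network provider writes one. A small sketch that mirrors that check; the directory path is taken from the log message, the extension filter is an assumption based on common CNI conventions, and this is not the kubelet's actual implementation:

// cnicheck.go - report whether the CNI config directory named in the
// kubelet message contains any plugin configuration. Hypothetical
// helper for illustration only.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d" // path from the kubelet message

	entries, err := os.ReadDir(confDir)
	if err != nil {
		log.Fatalf("read %s: %v", confDir, err)
	}

	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // common CNI config extensions (assumption)
			fmt.Println("config:", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration found - this is what keeps NetworkReady=false")
	}
}

On OpenShift that configuration is normally written by the cluster network operator's plugins (Multus, then OVN-Kubernetes) once they start, which is why the kubelet phrases the message as a question about the network provider.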
event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.346561 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.346578 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.346587 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.358410 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.361590 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.361630 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.361641 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.361675 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.361688 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.371844 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:11Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.371953 4732 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.373840 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.373860 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.373871 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.373884 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.373894 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.476910 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.476955 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.476968 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.476985 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.477001 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.525694 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:50:43.966548895 +0000 UTC Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.541985 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.542087 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.542185 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.542208 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.542263 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.542320 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.541985 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:11 crc kubenswrapper[4732]: E0131 09:02:11.542810 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.579480 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.579555 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.579569 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.579609 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.579622 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.683641 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.684245 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.684319 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.684511 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.684581 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.787234 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.787269 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.787279 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.787296 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.787305 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.890392 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.890466 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.890484 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.890510 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.890531 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.993399 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.993455 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.993469 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.993490 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:11 crc kubenswrapper[4732]: I0131 09:02:11.993504 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:11Z","lastTransitionTime":"2026-01-31T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.096374 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.096434 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.096446 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.096467 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.096480 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.198973 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.199025 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.199040 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.199059 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.199072 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.301443 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.301484 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.301493 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.301510 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.301521 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.403796 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.403850 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.403859 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.403877 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.403888 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.506079 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.506115 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.506124 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.506139 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.506148 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.526784 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:24:30.225627936 +0000 UTC Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.543454 4732 scope.go:117] "RemoveContainer" containerID="da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16" Jan 31 09:02:12 crc kubenswrapper[4732]: E0131 09:02:12.543842 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.553972 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.568825 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.582799 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.595439 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.609195 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.609238 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.609252 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.609270 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.609284 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.614930 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:
01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.628500 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 
2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.640547 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.652781 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.669279 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.684972 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 
09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.696787 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.708846 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec8fd54-25d6-41f1-9a8b-0e2823c951b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87eb9563b2ca7420cbb70ccd32aa77f2a82c769d57eceacd99a4d94cb1c3a0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.712253 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.712306 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.712317 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.712338 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.712350 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.722398 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.742133 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.759835 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374
f5616ea477c7a147ca896a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:56Z\\\",\\\"message\\\":\\\"31T09:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:01:56.451406 6413 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:01:56.451418 6413 services_controller.go:452] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451426 6413 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451433 6413 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451434 6413 ser\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.773136 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.790125 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.800140 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.810814 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:12Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.814454 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.814490 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.814499 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.814515 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.814526 4732 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.916576 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.916617 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.916626 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.916645 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:12 crc kubenswrapper[4732]: I0131 09:02:12.916656 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:12Z","lastTransitionTime":"2026-01-31T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.018905 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.018949 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.018990 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.019014 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.019026 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.122640 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.122723 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.122737 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.122788 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.122802 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.225816 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.225856 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.225866 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.225880 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.225891 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.328321 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.328387 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.328400 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.328421 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.328433 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.430786 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.430839 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.430851 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.430869 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.430884 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.527232 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:09:45.392045596 +0000 UTC Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.533098 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.533172 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.533208 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.533225 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.533237 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.542650 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.542706 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.542763 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:13 crc kubenswrapper[4732]: E0131 09:02:13.542794 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.542706 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:13 crc kubenswrapper[4732]: E0131 09:02:13.542876 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:13 crc kubenswrapper[4732]: E0131 09:02:13.542932 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:13 crc kubenswrapper[4732]: E0131 09:02:13.543014 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.635423 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.635475 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.635486 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.635504 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.635516 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.680379 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:13 crc kubenswrapper[4732]: E0131 09:02:13.680635 4732 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:02:13 crc kubenswrapper[4732]: E0131 09:02:13.680771 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs podName:3bd29a31-1a47-40da-afc5-6c4423067083 nodeName:}" failed. No retries permitted until 2026-01-31 09:02:45.680741408 +0000 UTC m=+103.986617612 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs") pod "network-metrics-daemon-7fgvm" (UID: "3bd29a31-1a47-40da-afc5-6c4423067083") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.738158 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.738207 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.738222 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.738242 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.738254 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.840269 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.840312 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.840323 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.840340 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.840350 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.943238 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.943289 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.943298 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.943313 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:13 crc kubenswrapper[4732]: I0131 09:02:13.943323 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:13Z","lastTransitionTime":"2026-01-31T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.045519 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.045560 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.045569 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.045584 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.045593 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.148357 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.148412 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.148422 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.148439 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.148456 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.251159 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.251194 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.251203 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.251220 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.251230 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.355422 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.355803 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.355900 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.356377 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.356480 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.459352 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.459393 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.459404 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.459421 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.459433 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.528328 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:34:34.583657141 +0000 UTC Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.561644 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.561741 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.561753 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.561768 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.561780 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.664135 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.664173 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.664184 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.664200 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.664213 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.766254 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.766443 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.766550 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.766695 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.766796 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.868899 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.868939 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.868952 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.868970 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.868982 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.974961 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.974996 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.975004 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.975017 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:14 crc kubenswrapper[4732]: I0131 09:02:14.975027 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:14Z","lastTransitionTime":"2026-01-31T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.077210 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.077235 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.077242 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.077273 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.077282 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.179507 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.179769 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.179884 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.179995 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.180118 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.282917 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.283221 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.283311 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.283379 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.283438 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.386047 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.386414 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.386476 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.386551 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.386609 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.489830 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.490311 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.490438 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.490557 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.490646 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.529505 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:59:17.435104169 +0000 UTC Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.542021 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:15 crc kubenswrapper[4732]: E0131 09:02:15.542202 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.542441 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.542600 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.542676 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:15 crc kubenswrapper[4732]: E0131 09:02:15.542627 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:15 crc kubenswrapper[4732]: E0131 09:02:15.542858 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:15 crc kubenswrapper[4732]: E0131 09:02:15.543016 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.593379 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.593715 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.593816 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.593897 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.593976 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.697242 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.697292 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.697304 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.697332 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.697344 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.800629 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.801080 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.801219 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.801386 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.801541 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.904561 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.904608 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.904624 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.904651 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.904700 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:15Z","lastTransitionTime":"2026-01-31T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.981737 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/0.log" Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.981829 4732 generic.go:334] "Generic (PLEG): container finished" podID="8e23192f-14db-41ef-af89-4a76e325d9c1" containerID="e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56" exitCode=1 Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.981889 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4mxsr" event={"ID":"8e23192f-14db-41ef-af89-4a76e325d9c1","Type":"ContainerDied","Data":"e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56"} Jan 31 09:02:15 crc kubenswrapper[4732]: I0131 09:02:15.982645 4732 scope.go:117] "RemoveContainer" containerID="e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.000400 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:15Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.006938 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.006965 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.006972 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.006987 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.006998 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.017954 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.030188 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.042393 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.077915 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:56Z\\\",\\\"message\\\":\\\"31T09:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:01:56.451406 6413 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:01:56.451418 6413 services_controller.go:452] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451426 6413 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451433 6413 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451434 6413 ser\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.093004 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.105883 4732 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.110427 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.110503 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.110516 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.110562 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.110576 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.121002 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:15Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4\\\\n2026-01-31T09:01:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4 to /host/opt/cni/bin/\\\\n2026-01-31T09:01:30Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:30Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.135341 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.150534 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.165360 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.177292 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.190260 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.203719 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 
09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.212989 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.213044 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.213058 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.213077 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.213094 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.215948 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.226225 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec8fd54-25d6-41f1-9a8b-0e2823c951b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87eb9563b2ca7420cbb70ccd32aa77f2a82c769d57eceacd99a4d94cb1c3a0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.237088 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.259056 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.275004 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:16Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.315594 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.315958 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.316052 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.316137 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.316216 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.418161 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.418397 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.418492 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.418565 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.418635 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.521885 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.522203 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.522342 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.522521 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.522754 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.530034 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:01:35.77226994 +0000 UTC Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.626086 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.626753 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.626961 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.627198 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.627411 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.730327 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.730369 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.730379 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.730398 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.730407 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.833743 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.833789 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.833800 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.833822 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.833837 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.936785 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.937095 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.937186 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.937283 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.937363 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:16Z","lastTransitionTime":"2026-01-31T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.988765 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/0.log" Jan 31 09:02:16 crc kubenswrapper[4732]: I0131 09:02:16.988876 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4mxsr" event={"ID":"8e23192f-14db-41ef-af89-4a76e325d9c1","Type":"ContainerStarted","Data":"456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.005888 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.018510 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.030984 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.041551 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.041761 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.041827 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.041909 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.041975 4732 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.051288 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:56Z\\\",\\\"message\\\":\\\"31T09:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:01:56.451406 6413 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:01:56.451418 6413 services_controller.go:452] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451426 6413 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451433 6413 
services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451434 6413 ser\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overri
des\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.064098 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.075839 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.090836 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:15Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4\\\\n2026-01-31T09:01:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4 to /host/opt/cni/bin/\\\\n2026-01-31T09:01:30Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:30Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.105563 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.118851 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.140292 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.144334 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.144371 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.144383 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.144402 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.144414 4732 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.153796 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.166335 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.175492 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec8fd54-25d6-41f1-9a8b-0e2823c951b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87eb9563b2ca7420cbb70ccd32aa77f2a82c769d57eceacd99a4d94cb1c3a0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.185188 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.203412 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.215981 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.226359 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.236973 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 
09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.251225 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:17Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.251796 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.251829 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.251840 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.251863 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.251877 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.354498 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.354543 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.354553 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.354573 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.354585 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.467593 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.467643 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.467654 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.467694 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.467705 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.530704 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:30:23.865734598 +0000 UTC Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.542068 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.542125 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.542179 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.542178 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:17 crc kubenswrapper[4732]: E0131 09:02:17.542975 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:17 crc kubenswrapper[4732]: E0131 09:02:17.542694 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:17 crc kubenswrapper[4732]: E0131 09:02:17.542776 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:17 crc kubenswrapper[4732]: E0131 09:02:17.542471 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.570927 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.570976 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.570992 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.571012 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.571024 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.681549 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.681585 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.681597 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.681616 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.681627 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.784960 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.785058 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.785096 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.785131 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.785153 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.888405 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.888461 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.888472 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.888492 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.888504 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.992046 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.992129 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.992145 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.992163 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:17 crc kubenswrapper[4732]: I0131 09:02:17.992174 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:17Z","lastTransitionTime":"2026-01-31T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.097083 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.097137 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.097155 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.097180 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.097197 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.200378 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.200909 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.200948 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.201048 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.201081 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.305101 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.305158 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.305172 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.305199 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.305216 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.408768 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.408836 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.408850 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.408869 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.408882 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.512460 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.512534 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.512552 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.512579 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.512601 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.530821 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 22:45:19.520000611 +0000 UTC Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.616110 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.616203 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.616225 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.616273 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.616304 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.718850 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.718889 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.718898 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.718913 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.718922 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.822491 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.822540 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.822551 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.822570 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.822585 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.925306 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.925352 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.925364 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.925384 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:18 crc kubenswrapper[4732]: I0131 09:02:18.925400 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:18Z","lastTransitionTime":"2026-01-31T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.028018 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.028079 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.028095 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.028122 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.028141 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.131453 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.131524 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.131543 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.131570 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.131589 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.234435 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.234492 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.234504 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.234522 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.234561 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.337737 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.337796 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.337811 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.337831 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.337845 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.440495 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.440541 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.440551 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.440589 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.440599 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.531716 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:26:17.31606912 +0000 UTC Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.541917 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.541977 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.541977 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.542032 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:19 crc kubenswrapper[4732]: E0131 09:02:19.542188 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:19 crc kubenswrapper[4732]: E0131 09:02:19.542326 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:19 crc kubenswrapper[4732]: E0131 09:02:19.542464 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:19 crc kubenswrapper[4732]: E0131 09:02:19.542604 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.543158 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.543198 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.543212 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.543233 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.543250 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.645628 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.645701 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.645717 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.645737 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.645750 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.748721 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.748766 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.748778 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.748796 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.748808 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.852350 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.852403 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.852415 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.852435 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.852451 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.955404 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.955454 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.955464 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.955483 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:19 crc kubenswrapper[4732]: I0131 09:02:19.955493 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:19Z","lastTransitionTime":"2026-01-31T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.058390 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.058457 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.058477 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.058498 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.058515 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.160710 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.160761 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.160772 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.160790 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.160802 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.263548 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.263599 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.263611 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.263629 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.263643 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.366444 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.366505 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.366516 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.366536 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.366546 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.469163 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.469211 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.469221 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.469239 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.469252 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.532780 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:41:12.233807632 +0000 UTC Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.572508 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.572765 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.572793 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.572827 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.572867 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.676740 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.676785 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.676794 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.676809 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.676819 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.780224 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.780340 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.780368 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.780404 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.780428 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.884190 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.884266 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.884284 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.884311 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.884330 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.987512 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.987553 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.987565 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.987584 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:20 crc kubenswrapper[4732]: I0131 09:02:20.987593 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:20Z","lastTransitionTime":"2026-01-31T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.090485 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.090532 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.090549 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.090570 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.090594 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.194074 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.194121 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.194132 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.194155 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.194172 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.296806 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.297216 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.297326 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.297406 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.297490 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.390070 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.390357 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.390436 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.390569 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.390684 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.408046 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.412740 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.412976 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.413048 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.413123 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.413195 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.426538 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.431511 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.431569 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.431582 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.431605 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.431618 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.445113 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.449181 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.449222 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.449234 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.449251 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.449262 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.464062 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.468319 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.468369 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.468380 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.468398 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.468409 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.481986 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2761c117-4c0c-4c53-891e-fc7b8fbd4017\\\",\\\"systemUUID\\\":\\\"5f9a5b0b-6336-4588-8df8-98fcbdc2a984\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:21Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.482414 4732 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.484487 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.484647 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.484813 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.484944 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.485047 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.532969 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:17:26.111811119 +0000 UTC Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.542286 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.542305 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.542446 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.542466 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.542994 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.543054 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.543415 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:21 crc kubenswrapper[4732]: E0131 09:02:21.543524 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.588056 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.588101 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.588111 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.588127 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.588138 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.690777 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.690829 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.690844 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.690863 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.690886 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.793635 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.793699 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.793713 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.793731 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.793743 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.896898 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.896943 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.896954 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.896970 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.896985 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.999378 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.999435 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.999451 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.999473 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:21 crc kubenswrapper[4732]: I0131 09:02:21.999489 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:21Z","lastTransitionTime":"2026-01-31T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.104045 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.104409 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.104492 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.104568 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.104648 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.207338 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.207383 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.207395 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.207412 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.207423 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.309399 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.309442 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.309457 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.309476 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.309490 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.412080 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.412127 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.412137 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.412154 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.412166 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.514820 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.514848 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.514856 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.514871 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.514880 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.533403 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 09:59:09.923233766 +0000 UTC Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.554863 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.566014 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.577987 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.596860 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.607265 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec8fd54-25d6-41f1-9a8b-0e2823c951b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87eb9563b2ca7420cbb70ccd32aa77f2a82c769d57eceacd99a4d94cb1c3a0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.617976 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.618014 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.618026 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.618046 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.618059 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.621147 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.644208 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.660764 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.677613 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.690274 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.701584 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.713898 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.720451 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.720487 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.720498 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.720515 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.720527 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.727612 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.754819 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:56Z\\\",\\\"message\\\":\\\"31T09:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:01:56.451406 6413 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:01:56.451418 6413 services_controller.go:452] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451426 6413 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451433 6413 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451434 6413 ser\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.771810 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.783810 4732 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.795596 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:15Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4\\\\n2026-01-31T09:01:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4 to /host/opt/cni/bin/\\\\n2026-01-31T09:01:30Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:30Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.811767 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.821523 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:22Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.822796 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.822914 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.822992 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.823098 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.823173 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.924792 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.924848 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.924864 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.924888 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:22 crc kubenswrapper[4732]: I0131 09:02:22.924906 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:22Z","lastTransitionTime":"2026-01-31T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.027123 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.027585 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.027694 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.027804 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.027898 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.130844 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.130895 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.130907 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.130924 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.130937 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.233909 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.233962 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.233975 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.233994 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.234007 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.338236 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.338281 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.338293 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.338312 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.338325 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.441693 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.441762 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.441784 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.441811 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.441830 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.533517 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:11:52.917904119 +0000 UTC Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.541939 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.541985 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.542152 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:23 crc kubenswrapper[4732]: E0131 09:02:23.542390 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.542448 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:23 crc kubenswrapper[4732]: E0131 09:02:23.542625 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:23 crc kubenswrapper[4732]: E0131 09:02:23.542826 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:23 crc kubenswrapper[4732]: E0131 09:02:23.542947 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.549466 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.549510 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.549521 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.549541 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.549554 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.652238 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.652312 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.652337 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.652369 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.652391 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.755078 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.755117 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.755130 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.755146 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.755157 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.858232 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.858277 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.858289 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.858308 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.858322 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.960973 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.961000 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.961008 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.961022 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:23 crc kubenswrapper[4732]: I0131 09:02:23.961031 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:23Z","lastTransitionTime":"2026-01-31T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.063326 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.063367 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.063376 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.063392 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.063402 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.165748 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.165777 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.165785 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.165820 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.165833 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.268835 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.268902 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.268926 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.268957 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.268981 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.372598 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.372695 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.372821 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.372847 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.372867 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.475610 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.475653 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.475695 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.475718 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.475733 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.534438 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:57:44.824440524 +0000 UTC Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.578777 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.578820 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.578830 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.578846 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.578855 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.682428 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.682517 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.682533 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.682557 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.682573 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.785130 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.785195 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.785208 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.785227 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.785241 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.888342 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.888385 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.888396 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.888414 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.888426 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.991250 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.991310 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.991325 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.991348 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:24 crc kubenswrapper[4732]: I0131 09:02:24.991361 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:24Z","lastTransitionTime":"2026-01-31T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.093988 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.094048 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.094061 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.094080 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.094092 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.197660 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.197727 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.197737 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.197756 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.197767 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.300325 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.300409 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.300421 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.300439 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.300451 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.402462 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.402499 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.402509 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.402523 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.402532 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.505003 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.505076 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.505099 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.505130 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.505151 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.534832 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:42:18.0853413 +0000 UTC Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.542162 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.542201 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.542258 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.542169 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:25 crc kubenswrapper[4732]: E0131 09:02:25.542353 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:25 crc kubenswrapper[4732]: E0131 09:02:25.542499 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:25 crc kubenswrapper[4732]: E0131 09:02:25.542612 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:25 crc kubenswrapper[4732]: E0131 09:02:25.542780 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.608094 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.608139 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.608150 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.608167 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.608178 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.711486 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.711545 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.711559 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.711582 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.711596 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.813968 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.814011 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.814022 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.814039 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.814051 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.917567 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.917690 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.917716 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.917747 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:25 crc kubenswrapper[4732]: I0131 09:02:25.917764 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:25Z","lastTransitionTime":"2026-01-31T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.019487 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.019532 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.019544 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.019574 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.019587 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.122406 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.122484 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.122510 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.122542 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.122630 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.229067 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.229109 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.229117 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.229131 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.229141 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.332358 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.332430 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.332450 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.332478 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.332498 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.434784 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.434842 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.434858 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.434881 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.434896 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.535860 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:02:56.16417558 +0000 UTC Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.538686 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.538733 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.538747 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.538766 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.538778 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.543896 4732 scope.go:117] "RemoveContainer" containerID="da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.643384 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.643901 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.643921 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.643947 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.643963 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.747375 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.747442 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.747459 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.747484 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.747503 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.850982 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.851024 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.851041 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.851064 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.851079 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.953497 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.953527 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.953536 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.953549 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:26 crc kubenswrapper[4732]: I0131 09:02:26.953558 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:26Z","lastTransitionTime":"2026-01-31T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.024008 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/2.log" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.026653 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.027173 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.040028 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.053857 4732 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.055582 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.055613 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.055623 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.055641 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.055652 4732 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.081006 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:56Z\\\",\\\"message\\\":\\\"31T09:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:01:56.451406 6413 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:01:56.451418 6413 services_controller.go:452] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451426 6413 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451433 6413 services_controller.go:453] Built 
service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451434 6413 ser\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"
hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.105314 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.118543 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.132709 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:15Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4\\\\n2026-01-31T09:01:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4 to /host/opt/cni/bin/\\\\n2026-01-31T09:01:30Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:30Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.146406 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.155827 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.157486 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.157517 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.157528 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.157546 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.157557 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.167929 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.221860 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.236330 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.249279 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.259739 4732 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.259776 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.259788 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.259805 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.259816 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.263263 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.281525 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d71996
60f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.294128 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.309542 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.322622 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.337105 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.347471 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec8fd54-25d6-41f1-9a8b-0e2823c951b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87eb9563b2ca7420cbb70ccd32aa77f2a82c769d57eceacd99a4d94cb1c3a0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.362173 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.362219 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.362228 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.362243 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.362253 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.433321 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.433430 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.433470 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.433446721 +0000 UTC m=+149.739322925 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.433512 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.433584 4732 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.433679 4732 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.433708 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.433642967 +0000 UTC m=+149.739519181 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.434331 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.433786882 +0000 UTC m=+149.739663086 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.465117 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.465516 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.465582 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.465653 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.465772 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.534411 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.534527 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.534764 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.534793 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.534817 4732 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.534896 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-31 09:03:31.534869559 +0000 UTC m=+149.840745813 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.535194 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.535294 4732 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.535361 4732 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.535518 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.53549355 +0000 UTC m=+149.841369754 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.537023 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:23:53.138489304 +0000 UTC Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.542417 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.542437 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.542478 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.542592 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.542611 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.542783 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.542716 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:27 crc kubenswrapper[4732]: E0131 09:02:27.543010 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.568459 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.568520 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.568534 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.568554 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.568565 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.671014 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.671060 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.671071 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.671090 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.671102 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.774124 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.774213 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.774251 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.774283 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.774305 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.877912 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.877973 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.877984 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.878016 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.878028 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.981331 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.981795 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.981924 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.982008 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:27 crc kubenswrapper[4732]: I0131 09:02:27.982075 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:27Z","lastTransitionTime":"2026-01-31T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.085793 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.085844 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.085859 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.085878 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.085894 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.189312 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.189375 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.189391 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.189413 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.189428 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.292753 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.292850 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.292899 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.292927 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.292945 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.395887 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.396281 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.396444 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.396588 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.396755 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.499262 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.499302 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.499312 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.499328 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.499338 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.537378 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:52:42.318378592 +0000 UTC Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.602033 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.602102 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.602119 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.602144 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.602162 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.705061 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.705121 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.705136 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.705159 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.705178 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.807902 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.807976 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.807989 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.808019 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.808032 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.910728 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.910792 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.910813 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.910839 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:28 crc kubenswrapper[4732]: I0131 09:02:28.910860 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:28Z","lastTransitionTime":"2026-01-31T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.013848 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.013896 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.013905 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.013923 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.013933 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.034532 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/3.log" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.035143 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/2.log" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.037553 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" exitCode=1 Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.037598 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.037644 4732 scope.go:117] "RemoveContainer" containerID="da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.038295 4732 scope.go:117] "RemoveContainer" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:02:29 crc kubenswrapper[4732]: E0131 09:02:29.038478 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.055974 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c063f211d6d06c3f8b94d316d68da346d077c6e85197754ec7c77ad736fd1217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.068228 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d790207-d357-4b47-87bf-5b505e061820\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5eaa2b856e1dedd7398d0c513f742fa5925c4748bf44eb15022adbb4b62354c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2bw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnbt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.080308 4732 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0313609d-3507-4db5-a190-9dbf59d73e6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b303be38f64266f4e019d30e8b988945133fc76a47e305429ed048d68cdeac76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2d9e1c8db61a2418e981340a7cb999983d4a10e79977507abdf3bdb471939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gchqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.090746 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1add5e75-7e0e-4248-9431-256dc477beeb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7bd726ccae341dfff351d4170276b74d4b6b4f34806b7e1fb23ab517201928d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6678cf179a4e13991aa6a082456b3497a4665e0999912b8002fb84a0b6ca2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d09d44253b8c77894abc55d80843b79509bf7213c0bf8b4d69059d53e76e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.100617 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec8fd54-25d6-41f1-9a8b-0e2823c951b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87eb9563b2ca7420cbb70ccd32aa77f2a82c769d57eceacd99a4d94cb1c3a0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://328c8ed55e178646a8bd3d914985f8171b6413b7b007f3bca609cfb432a227f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.113071 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb621da-40b8-4a07-a7bd-06800007bc59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dadc6987ef6ef2593fa8aa0ec3903dc3cfea907f73fb68dca2c141362577bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e897d26ac7b103c21a9cb176f1a90a11098a9a2f6a4cd28f697a90ee7c9f6f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a31769db7f40442ff410b053055e413b11d7dba7d48dfce853ca38e7b9f7595e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407516efcd2436b964fddea3bdc778826cde289422139bc4577c9ba8c0c43675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.116972 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.117020 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.117032 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.117053 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.117065 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.137511 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58b600ed-9e96-4296-bc41-dda5d2205921\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f14a98662959fe8132486d2f648fb443a407e90160ee8c6eac7e2a3d6835b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771af18934041722ccdbde47137c83e9d93c5e7a07b1e9b26e18354bcd2fda4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2bea5befd94ee4d3b6501273e8b8365102a35cc07b97740fc893f8d70a9d134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fd5264b5a2dcec6f8b363208d5654f2d7199660f04be408ee7ab3913d542e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a43d5e1409abb031f1b950dfc753454b45008067d784642945ea0dfb087e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382f1fa043705a0098e27ea34beeee0b88276aa355a7de07bc29b070949b81e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b16d2732d6d6381045b88446a322622a7b1271f08438cb60e57c05c52ef7faf8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273f5652d37e2be568bc388a7f9e96c66d0ce9201678b46736986815bd25d6bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.151606 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62650e77e5971d617a77ae7461d51276fc76e3699dc4f86ebbb9b63ab0d86b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.163055 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7423f7d2a82ae276cff0f5a2730b0de7abbc5b28a8bd8983d23fe44c2e9ca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075971e7f28f40b90caae8b3681dd283860371a6b3f82c92c421a87a27ccb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.178524 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.189583 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.199033 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nsgpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533741c8-f72a-4834-ad02-d33fc939e529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f6f1a4ba8fc3975dc3ef7e86d5b17b0488de7918fe672040f115f640b8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjff8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nsgpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.209801 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bllbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d87332-eaea-4007-a03e-a9a0f744563a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a371d8cbedbc18b4953520c3b135e102db92fc57883d31a8f78c510498ca41b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lcdpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bllbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.219968 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.220531 4732 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.220651 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.220806 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.220921 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.229233 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77508f48f518644f99bdda577737a5ed3b909f9
7a70c1c297c035609ba46fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2d4b479ec06e475664a1f5f1c4c052f87ad374f5616ea477c7a147ca896a16\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:01:56Z\\\",\\\"message\\\":\\\"31T09:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:01:56.451406 6413 services_controller.go:451] Built service openshift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 09:01:56.451418 6413 services_controller.go:452] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451426 6413 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451433 6413 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0131 09:01:56.451434 6413 ser\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:28Z\\\",\\\"message\\\":\\\"k=default: []services.lbConfig(nil)\\\\nI0131 09:02:27.497102 6910 services_controller.go:445] Built service openshift-marketplace/certified-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI0131 09:02:27.496963 6910 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0131 09:02:27.497126 6910 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:27Z is after 2025-08-24T17:21:41Z]\\\\nI0131 09:02:27.497135 6910 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI013\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:02:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jktvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mtkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.243851 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 09:01:18.226954 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 09:01:18.227544 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1858903392/tls.crt::/tmp/serving-cert-1858903392/tls.key\\\\\\\"\\\\nI0131 09:01:23.404020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 09:01:23.413560 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 09:01:23.413612 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 09:01:23.413655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 09:01:23.413760 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 09:01:23.423398 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 09:01:23.423434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 09:01:23.423448 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 09:01:23.423452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 09:01:23.423448 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 09:01:23.423457 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 09:01:23.423476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 09:01:23.426339 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.255196 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.268212 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4mxsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e23192f-14db-41ef-af89-4a76e325d9c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T09:02:15Z\\\",\\\"message\\\":\\\"2026-01-31T09:01:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4\\\\n2026-01-31T09:01:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_648046fe-98ce-4b13-a3af-42227ab719e4 to /host/opt/cni/bin/\\\\n2026-01-31T09:01:30Z [verbose] multus-daemon started\\\\n2026-01-31T09:01:30Z [verbose] Readiness Indicator file check\\\\n2026-01-31T09:02:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwsnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4mxsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.285435 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e6e0f4-2302-447f-a5e0-7db3d7b73cb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://038ff30d8a53b94c204f5dc0c72824fcb7f08a423eec2ce05554f23607d4dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c71dc89758bff193a85ceeaa97b21ba6c93389ef9cf380bd0bac89bddddf014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4374e8014d4e67ab8df14785ed6d1bd78b33567550758297fcc04796ea5c688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85484d53f8180c3bc83391004e8a7c6b432af09f04f597567e8ffe0b9c2e3d58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://111b50c40cf72f105ae5c74d668673853a783050a62e89127fa666a6f8d431d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2458247a5e615b929a15e280bb9d5a65c46a991ea2cd83502a8c42b1ca0e299b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a6ff3cd5564994b28608dad1cd0848c771db950208888ac8a2213fa5dc3a168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T09:01:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T09:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhxt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t9kqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.301367 4732 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bd29a31-1a47-40da-afc5-6c4423067083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T09:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47jtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T09:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7fgvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T09:02:29Z is after 2025-08-24T17:21:41Z" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.324416 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.324478 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.324490 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.324511 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.324524 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.426803 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.426847 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.426858 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.426874 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.426885 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.529793 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.529876 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.529897 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.529928 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.529945 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.538042 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:29:29.645079764 +0000 UTC Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.542460 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.542522 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.542614 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.542460 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:29 crc kubenswrapper[4732]: E0131 09:02:29.542716 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:29 crc kubenswrapper[4732]: E0131 09:02:29.542850 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:29 crc kubenswrapper[4732]: E0131 09:02:29.543006 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:29 crc kubenswrapper[4732]: E0131 09:02:29.543084 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.633388 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.633939 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.633950 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.633974 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.633992 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.737146 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.737200 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.737216 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.737238 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.737253 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.840364 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.840418 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.840430 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.840449 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.840461 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.942741 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.942784 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.942793 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.942809 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:29 crc kubenswrapper[4732]: I0131 09:02:29.942820 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:29Z","lastTransitionTime":"2026-01-31T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.044036 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/3.log" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.044520 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.044577 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.044824 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.044854 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.044872 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.149175 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.149239 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.149262 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.149291 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.149312 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.252461 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.252505 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.252516 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.252536 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.252550 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.355697 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.355760 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.355777 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.355802 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.355820 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.459984 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.460057 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.460081 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.460114 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.460139 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.538462 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:22:17.819718267 +0000 UTC Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.563351 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.563396 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.563404 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.563420 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.563432 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.670287 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.670347 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.670363 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.670385 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.670402 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.773596 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.773636 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.773646 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.773679 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.773692 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.876946 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.877019 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.877038 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.877069 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.877136 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.980825 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.980899 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.980917 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.980954 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:30 crc kubenswrapper[4732]: I0131 09:02:30.980990 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:30Z","lastTransitionTime":"2026-01-31T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.083972 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.084073 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.084094 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.084128 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.084150 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.187171 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.187219 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.187229 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.187246 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.187256 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.290288 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.290335 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.290346 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.290361 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.290369 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.392844 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.392918 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.392931 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.392950 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.392961 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.496196 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.496263 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.496277 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.496812 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.496867 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.539553 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:45:44.036308111 +0000 UTC Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.542118 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.542116 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.542278 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.542334 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:31 crc kubenswrapper[4732]: E0131 09:02:31.542358 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:31 crc kubenswrapper[4732]: E0131 09:02:31.542483 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:31 crc kubenswrapper[4732]: E0131 09:02:31.542542 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:31 crc kubenswrapper[4732]: E0131 09:02:31.542592 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.603610 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.603642 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.603650 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.603682 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.603695 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.706894 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.706965 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.706990 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.707022 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.707044 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.809008 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.809050 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.809058 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.809073 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.809081 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.840905 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.840968 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.840991 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.841020 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.841042 4732 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T09:02:31Z","lastTransitionTime":"2026-01-31T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.881484 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq"] Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.882100 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.885917 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.887802 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.889543 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.889995 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.914616 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4mxsr" podStartSLOduration=64.914582304 podStartE2EDuration="1m4.914582304s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:31.912377571 +0000 UTC m=+90.218253805" watchObservedRunningTime="2026-01-31 09:02:31.914582304 +0000 UTC m=+90.220458548" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.940436 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-t9kqf" podStartSLOduration=64.940404321 podStartE2EDuration="1m4.940404321s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:31.939565044 +0000 UTC m=+90.245441318" watchObservedRunningTime="2026-01-31 09:02:31.940404321 +0000 UTC m=+90.246280575" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.983273 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d4f59b7e-7610-44a3-ae37-6c095081e3e5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.983361 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d4f59b7e-7610-44a3-ae37-6c095081e3e5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.983447 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f59b7e-7610-44a3-ae37-6c095081e3e5-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.983485 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4f59b7e-7610-44a3-ae37-6c095081e3e5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:31 crc kubenswrapper[4732]: I0131 09:02:31.983519 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f59b7e-7610-44a3-ae37-6c095081e3e5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.009988 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.009963794 podStartE2EDuration="1m9.009963794s" podCreationTimestamp="2026-01-31 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:31.992645146 +0000 UTC m=+90.298521370" watchObservedRunningTime="2026-01-31 09:02:32.009963794 +0000 UTC m=+90.315839998" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.081105 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podStartSLOduration=65.081083709 podStartE2EDuration="1m5.081083709s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:32.049790441 +0000 UTC m=+90.355666645" watchObservedRunningTime="2026-01-31 09:02:32.081083709 +0000 UTC m=+90.386959913" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.084546 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f59b7e-7610-44a3-ae37-6c095081e3e5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.084598 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4f59b7e-7610-44a3-ae37-6c095081e3e5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.084638 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f59b7e-7610-44a3-ae37-6c095081e3e5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: 
I0131 09:02:32.084715 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d4f59b7e-7610-44a3-ae37-6c095081e3e5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.084754 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d4f59b7e-7610-44a3-ae37-6c095081e3e5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.084870 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d4f59b7e-7610-44a3-ae37-6c095081e3e5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.086036 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d4f59b7e-7610-44a3-ae37-6c095081e3e5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.086295 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4f59b7e-7610-44a3-ae37-6c095081e3e5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.101961 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f59b7e-7610-44a3-ae37-6c095081e3e5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.109382 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f59b7e-7610-44a3-ae37-6c095081e3e5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xjsnq\" (UID: \"d4f59b7e-7610-44a3-ae37-6c095081e3e5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.119544 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.11952233 podStartE2EDuration="40.11952233s" podCreationTimestamp="2026-01-31 09:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:32.081858973 +0000 UTC m=+90.387735177" watchObservedRunningTime="2026-01-31 09:02:32.11952233 +0000 UTC m=+90.425398534" Jan 31 09:02:32 crc 
Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.177538 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.177510103 podStartE2EDuration="1m5.177510103s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:32.177299206 +0000 UTC m=+90.483175420" watchObservedRunningTime="2026-01-31 09:02:32.177510103 +0000 UTC m=+90.483386307"
Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.177709 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gchqk" podStartSLOduration=65.177703249 podStartE2EDuration="1m5.177703249s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:32.159656337 +0000 UTC m=+90.465532561" watchObservedRunningTime="2026-01-31 09:02:32.177703249 +0000 UTC m=+90.483579443"
Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.189633 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.18961286 podStartE2EDuration="22.18961286s" podCreationTimestamp="2026-01-31 09:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:32.189187176 +0000 UTC m=+90.495063390" watchObservedRunningTime="2026-01-31 09:02:32.18961286 +0000 UTC m=+90.495489064"
Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.200148 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nsgpk" podStartSLOduration=65.200119735 podStartE2EDuration="1m5.200119735s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:32.199682831 +0000 UTC m=+90.505559045" watchObservedRunningTime="2026-01-31 09:02:32.200119735 +0000 UTC m=+90.505995949"
Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.205203 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq"
Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.235758 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bllbs" podStartSLOduration=65.235738494 podStartE2EDuration="1m5.235738494s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:32.214535508 +0000 UTC m=+90.520411712" watchObservedRunningTime="2026-01-31 09:02:32.235738494 +0000 UTC m=+90.541614698"
Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.540062 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 15:28:32.573923905 +0000 UTC
Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.540405 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 31 09:02:32 crc kubenswrapper[4732]: I0131 09:02:32.547092 4732 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 31 09:02:33 crc kubenswrapper[4732]: I0131 09:02:33.058855 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" event={"ID":"d4f59b7e-7610-44a3-ae37-6c095081e3e5","Type":"ContainerStarted","Data":"6e2403264654c050102a6e3e43b9cbc23944d62891391025f3d1bf30dbbc5f7f"}
Jan 31 09:02:33 crc kubenswrapper[4732]: I0131 09:02:33.058965 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" event={"ID":"d4f59b7e-7610-44a3-ae37-6c095081e3e5","Type":"ContainerStarted","Data":"9de86783807d0f864b334778fdf819f6f83e3fbfcf1762d7db3114c467445704"}
Jan 31 09:02:33 crc kubenswrapper[4732]: I0131 09:02:33.085129 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjsnq" podStartSLOduration=66.085102261 podStartE2EDuration="1m6.085102261s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:02:33.080649385 +0000 UTC m=+91.386525589" watchObservedRunningTime="2026-01-31 09:02:33.085102261 +0000 UTC m=+91.390978475"
Jan 31 09:02:33 crc kubenswrapper[4732]: I0131 09:02:33.542289 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 09:02:33 crc kubenswrapper[4732]: E0131 09:02:33.542444 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 09:02:33 crc kubenswrapper[4732]: I0131 09:02:33.542473 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:02:33 crc kubenswrapper[4732]: I0131 09:02:33.542513 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:33 crc kubenswrapper[4732]: E0131 09:02:33.542606 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:33 crc kubenswrapper[4732]: E0131 09:02:33.542760 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:33 crc kubenswrapper[4732]: I0131 09:02:33.542812 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:33 crc kubenswrapper[4732]: E0131 09:02:33.542936 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:35 crc kubenswrapper[4732]: I0131 09:02:35.542173 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:35 crc kubenswrapper[4732]: I0131 09:02:35.542269 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:35 crc kubenswrapper[4732]: E0131 09:02:35.542322 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:35 crc kubenswrapper[4732]: I0131 09:02:35.542381 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:35 crc kubenswrapper[4732]: E0131 09:02:35.542612 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:35 crc kubenswrapper[4732]: I0131 09:02:35.543035 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:35 crc kubenswrapper[4732]: E0131 09:02:35.543184 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:35 crc kubenswrapper[4732]: E0131 09:02:35.543496 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:37 crc kubenswrapper[4732]: I0131 09:02:37.542139 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:37 crc kubenswrapper[4732]: I0131 09:02:37.542139 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:37 crc kubenswrapper[4732]: E0131 09:02:37.542334 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:37 crc kubenswrapper[4732]: I0131 09:02:37.542269 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:37 crc kubenswrapper[4732]: E0131 09:02:37.542506 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:37 crc kubenswrapper[4732]: E0131 09:02:37.542572 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:37 crc kubenswrapper[4732]: I0131 09:02:37.542824 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:37 crc kubenswrapper[4732]: E0131 09:02:37.542940 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:39 crc kubenswrapper[4732]: I0131 09:02:39.542461 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:39 crc kubenswrapper[4732]: I0131 09:02:39.542536 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:39 crc kubenswrapper[4732]: I0131 09:02:39.542565 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:39 crc kubenswrapper[4732]: I0131 09:02:39.542628 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:39 crc kubenswrapper[4732]: E0131 09:02:39.542892 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:39 crc kubenswrapper[4732]: E0131 09:02:39.542942 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:39 crc kubenswrapper[4732]: E0131 09:02:39.543087 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:39 crc kubenswrapper[4732]: E0131 09:02:39.543237 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:41 crc kubenswrapper[4732]: I0131 09:02:41.542388 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:41 crc kubenswrapper[4732]: I0131 09:02:41.542490 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:41 crc kubenswrapper[4732]: E0131 09:02:41.542540 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:41 crc kubenswrapper[4732]: I0131 09:02:41.542747 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:41 crc kubenswrapper[4732]: E0131 09:02:41.542742 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:41 crc kubenswrapper[4732]: I0131 09:02:41.542784 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:41 crc kubenswrapper[4732]: E0131 09:02:41.542825 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:41 crc kubenswrapper[4732]: E0131 09:02:41.542863 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:43 crc kubenswrapper[4732]: I0131 09:02:43.541885 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:43 crc kubenswrapper[4732]: I0131 09:02:43.541937 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:43 crc kubenswrapper[4732]: I0131 09:02:43.541898 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:43 crc kubenswrapper[4732]: I0131 09:02:43.542015 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:43 crc kubenswrapper[4732]: E0131 09:02:43.542362 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:43 crc kubenswrapper[4732]: E0131 09:02:43.542532 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:43 crc kubenswrapper[4732]: E0131 09:02:43.542599 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:43 crc kubenswrapper[4732]: E0131 09:02:43.542708 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:44 crc kubenswrapper[4732]: I0131 09:02:44.543598 4732 scope.go:117] "RemoveContainer" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:02:44 crc kubenswrapper[4732]: E0131 09:02:44.543894 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" Jan 31 09:02:45 crc kubenswrapper[4732]: I0131 09:02:45.541698 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:45 crc kubenswrapper[4732]: I0131 09:02:45.541831 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:45 crc kubenswrapper[4732]: E0131 09:02:45.541928 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:45 crc kubenswrapper[4732]: I0131 09:02:45.541739 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:45 crc kubenswrapper[4732]: E0131 09:02:45.541991 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:45 crc kubenswrapper[4732]: I0131 09:02:45.541739 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:45 crc kubenswrapper[4732]: E0131 09:02:45.542046 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:45 crc kubenswrapper[4732]: E0131 09:02:45.542075 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:45 crc kubenswrapper[4732]: I0131 09:02:45.738406 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:45 crc kubenswrapper[4732]: E0131 09:02:45.738622 4732 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:02:45 crc kubenswrapper[4732]: E0131 09:02:45.738751 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs podName:3bd29a31-1a47-40da-afc5-6c4423067083 nodeName:}" failed. No retries permitted until 2026-01-31 09:03:49.738724886 +0000 UTC m=+168.044601090 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs") pod "network-metrics-daemon-7fgvm" (UID: "3bd29a31-1a47-40da-afc5-6c4423067083") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 09:02:47 crc kubenswrapper[4732]: I0131 09:02:47.541874 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:47 crc kubenswrapper[4732]: I0131 09:02:47.541958 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:47 crc kubenswrapper[4732]: I0131 09:02:47.541916 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:47 crc kubenswrapper[4732]: I0131 09:02:47.541892 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:47 crc kubenswrapper[4732]: E0131 09:02:47.542087 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:47 crc kubenswrapper[4732]: E0131 09:02:47.542312 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:47 crc kubenswrapper[4732]: E0131 09:02:47.542506 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:47 crc kubenswrapper[4732]: E0131 09:02:47.542637 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:49 crc kubenswrapper[4732]: I0131 09:02:49.542816 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:49 crc kubenswrapper[4732]: I0131 09:02:49.542869 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:49 crc kubenswrapper[4732]: I0131 09:02:49.542912 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:49 crc kubenswrapper[4732]: E0131 09:02:49.542971 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:49 crc kubenswrapper[4732]: I0131 09:02:49.542835 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:49 crc kubenswrapper[4732]: E0131 09:02:49.543138 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:49 crc kubenswrapper[4732]: E0131 09:02:49.543291 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:49 crc kubenswrapper[4732]: E0131 09:02:49.543329 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:51 crc kubenswrapper[4732]: I0131 09:02:51.542852 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:51 crc kubenswrapper[4732]: I0131 09:02:51.542901 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:51 crc kubenswrapper[4732]: I0131 09:02:51.542987 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:51 crc kubenswrapper[4732]: I0131 09:02:51.543041 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:51 crc kubenswrapper[4732]: E0131 09:02:51.543099 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:51 crc kubenswrapper[4732]: E0131 09:02:51.543189 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:51 crc kubenswrapper[4732]: E0131 09:02:51.543413 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:51 crc kubenswrapper[4732]: E0131 09:02:51.543556 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:53 crc kubenswrapper[4732]: I0131 09:02:53.542546 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:53 crc kubenswrapper[4732]: I0131 09:02:53.542641 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:53 crc kubenswrapper[4732]: I0131 09:02:53.542641 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:53 crc kubenswrapper[4732]: E0131 09:02:53.542858 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:53 crc kubenswrapper[4732]: I0131 09:02:53.542896 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:53 crc kubenswrapper[4732]: E0131 09:02:53.543210 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:53 crc kubenswrapper[4732]: E0131 09:02:53.543310 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:53 crc kubenswrapper[4732]: E0131 09:02:53.543406 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:55 crc kubenswrapper[4732]: I0131 09:02:55.541840 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:55 crc kubenswrapper[4732]: I0131 09:02:55.541893 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:55 crc kubenswrapper[4732]: I0131 09:02:55.541911 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:55 crc kubenswrapper[4732]: I0131 09:02:55.541988 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:55 crc kubenswrapper[4732]: E0131 09:02:55.542093 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:55 crc kubenswrapper[4732]: E0131 09:02:55.542253 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:55 crc kubenswrapper[4732]: E0131 09:02:55.542428 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:55 crc kubenswrapper[4732]: E0131 09:02:55.542650 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:57 crc kubenswrapper[4732]: I0131 09:02:57.542562 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:57 crc kubenswrapper[4732]: I0131 09:02:57.542613 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:57 crc kubenswrapper[4732]: I0131 09:02:57.542623 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:57 crc kubenswrapper[4732]: I0131 09:02:57.542562 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:57 crc kubenswrapper[4732]: E0131 09:02:57.542736 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:57 crc kubenswrapper[4732]: E0131 09:02:57.542885 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:57 crc kubenswrapper[4732]: E0131 09:02:57.542992 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:57 crc kubenswrapper[4732]: E0131 09:02:57.543252 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:59 crc kubenswrapper[4732]: I0131 09:02:59.542054 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:02:59 crc kubenswrapper[4732]: E0131 09:02:59.542196 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:02:59 crc kubenswrapper[4732]: I0131 09:02:59.542172 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:02:59 crc kubenswrapper[4732]: I0131 09:02:59.542336 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:02:59 crc kubenswrapper[4732]: I0131 09:02:59.542443 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:02:59 crc kubenswrapper[4732]: E0131 09:02:59.542441 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:02:59 crc kubenswrapper[4732]: E0131 09:02:59.542552 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:02:59 crc kubenswrapper[4732]: E0131 09:02:59.542648 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:02:59 crc kubenswrapper[4732]: I0131 09:02:59.544637 4732 scope.go:117] "RemoveContainer" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:02:59 crc kubenswrapper[4732]: E0131 09:02:59.544981 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8mtkt_openshift-ovn-kubernetes(82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" Jan 31 09:03:01 crc kubenswrapper[4732]: I0131 09:03:01.542655 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:01 crc kubenswrapper[4732]: I0131 09:03:01.542703 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:01 crc kubenswrapper[4732]: I0131 09:03:01.542743 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:01 crc kubenswrapper[4732]: E0131 09:03:01.543356 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:01 crc kubenswrapper[4732]: I0131 09:03:01.542800 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:01 crc kubenswrapper[4732]: E0131 09:03:01.543159 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:01 crc kubenswrapper[4732]: E0131 09:03:01.543494 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:01 crc kubenswrapper[4732]: E0131 09:03:01.543635 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:02 crc kubenswrapper[4732]: I0131 09:03:02.159415 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/1.log" Jan 31 09:03:02 crc kubenswrapper[4732]: I0131 09:03:02.160004 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/0.log" Jan 31 09:03:02 crc kubenswrapper[4732]: I0131 09:03:02.160067 4732 generic.go:334] "Generic (PLEG): container finished" podID="8e23192f-14db-41ef-af89-4a76e325d9c1" containerID="456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617" exitCode=1 Jan 31 09:03:02 crc kubenswrapper[4732]: I0131 09:03:02.160104 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4mxsr" event={"ID":"8e23192f-14db-41ef-af89-4a76e325d9c1","Type":"ContainerDied","Data":"456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617"} Jan 31 09:03:02 crc kubenswrapper[4732]: I0131 09:03:02.160149 4732 scope.go:117] "RemoveContainer" containerID="e6af50b44afec0e5f0757efda1ff5567845c4482609a82bafcbf74947c657c56" Jan 31 09:03:02 crc kubenswrapper[4732]: I0131 09:03:02.160720 4732 scope.go:117] "RemoveContainer" containerID="456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617" Jan 31 09:03:02 crc kubenswrapper[4732]: E0131 09:03:02.160918 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4mxsr_openshift-multus(8e23192f-14db-41ef-af89-4a76e325d9c1)\"" pod="openshift-multus/multus-4mxsr" podUID="8e23192f-14db-41ef-af89-4a76e325d9c1" Jan 31 09:03:02 crc kubenswrapper[4732]: E0131 
09:03:02.518795 4732 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 09:03:02 crc kubenswrapper[4732]: E0131 09:03:02.692310 4732 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:03:03 crc kubenswrapper[4732]: I0131 09:03:03.170265 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/1.log" Jan 31 09:03:03 crc kubenswrapper[4732]: I0131 09:03:03.543270 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:03 crc kubenswrapper[4732]: I0131 09:03:03.543340 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:03 crc kubenswrapper[4732]: E0131 09:03:03.543500 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:03 crc kubenswrapper[4732]: I0131 09:03:03.543897 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:03 crc kubenswrapper[4732]: I0131 09:03:03.543921 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:03 crc kubenswrapper[4732]: E0131 09:03:03.544024 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:03 crc kubenswrapper[4732]: E0131 09:03:03.544257 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:03 crc kubenswrapper[4732]: E0131 09:03:03.544396 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:05 crc kubenswrapper[4732]: I0131 09:03:05.542460 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:05 crc kubenswrapper[4732]: I0131 09:03:05.542560 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:05 crc kubenswrapper[4732]: E0131 09:03:05.543609 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:05 crc kubenswrapper[4732]: I0131 09:03:05.542615 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:05 crc kubenswrapper[4732]: E0131 09:03:05.543787 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:05 crc kubenswrapper[4732]: I0131 09:03:05.542460 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:05 crc kubenswrapper[4732]: E0131 09:03:05.543944 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:05 crc kubenswrapper[4732]: E0131 09:03:05.544099 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:07 crc kubenswrapper[4732]: I0131 09:03:07.542119 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:07 crc kubenswrapper[4732]: I0131 09:03:07.542197 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:07 crc kubenswrapper[4732]: I0131 09:03:07.542910 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:07 crc kubenswrapper[4732]: E0131 09:03:07.543130 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:07 crc kubenswrapper[4732]: I0131 09:03:07.543207 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:07 crc kubenswrapper[4732]: E0131 09:03:07.543350 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:07 crc kubenswrapper[4732]: E0131 09:03:07.543400 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:07 crc kubenswrapper[4732]: E0131 09:03:07.543488 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:07 crc kubenswrapper[4732]: E0131 09:03:07.694218 4732 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:03:09 crc kubenswrapper[4732]: I0131 09:03:09.541705 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:09 crc kubenswrapper[4732]: I0131 09:03:09.541707 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:09 crc kubenswrapper[4732]: E0131 09:03:09.542172 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:09 crc kubenswrapper[4732]: I0131 09:03:09.541743 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:09 crc kubenswrapper[4732]: I0131 09:03:09.541738 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:09 crc kubenswrapper[4732]: E0131 09:03:09.542241 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:09 crc kubenswrapper[4732]: E0131 09:03:09.542408 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:09 crc kubenswrapper[4732]: E0131 09:03:09.542576 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:11 crc kubenswrapper[4732]: I0131 09:03:11.541585 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:11 crc kubenswrapper[4732]: I0131 09:03:11.541687 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:11 crc kubenswrapper[4732]: E0131 09:03:11.541765 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:11 crc kubenswrapper[4732]: E0131 09:03:11.541828 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:11 crc kubenswrapper[4732]: I0131 09:03:11.541704 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:11 crc kubenswrapper[4732]: I0131 09:03:11.541973 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:11 crc kubenswrapper[4732]: E0131 09:03:11.542082 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:11 crc kubenswrapper[4732]: E0131 09:03:11.542188 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:12 crc kubenswrapper[4732]: I0131 09:03:12.543801 4732 scope.go:117] "RemoveContainer" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:03:12 crc kubenswrapper[4732]: E0131 09:03:12.695918 4732 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:03:13 crc kubenswrapper[4732]: I0131 09:03:13.210124 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/3.log" Jan 31 09:03:13 crc kubenswrapper[4732]: I0131 09:03:13.214222 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerStarted","Data":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} Jan 31 09:03:13 crc kubenswrapper[4732]: I0131 09:03:13.214818 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:03:13 crc kubenswrapper[4732]: I0131 09:03:13.254960 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podStartSLOduration=106.254926671 podStartE2EDuration="1m46.254926671s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:13.251969774 +0000 UTC m=+131.557845988" watchObservedRunningTime="2026-01-31 09:03:13.254926671 +0000 UTC m=+131.560802915" Jan 31 09:03:13 crc kubenswrapper[4732]: I0131 09:03:13.541140 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7fgvm"] Jan 31 09:03:13 crc kubenswrapper[4732]: I0131 09:03:13.541637 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:13 crc kubenswrapper[4732]: I0131 09:03:13.541797 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:13 crc kubenswrapper[4732]: E0131 09:03:13.541795 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:13 crc kubenswrapper[4732]: E0131 09:03:13.541893 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:13 crc kubenswrapper[4732]: I0131 09:03:13.541888 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:13 crc kubenswrapper[4732]: I0131 09:03:13.541941 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:13 crc kubenswrapper[4732]: E0131 09:03:13.542084 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:13 crc kubenswrapper[4732]: E0131 09:03:13.542183 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:14 crc kubenswrapper[4732]: I0131 09:03:14.217518 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:14 crc kubenswrapper[4732]: E0131 09:03:14.217845 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:15 crc kubenswrapper[4732]: I0131 09:03:15.542299 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:15 crc kubenswrapper[4732]: I0131 09:03:15.542325 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:15 crc kubenswrapper[4732]: E0131 09:03:15.543457 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:15 crc kubenswrapper[4732]: I0131 09:03:15.542356 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:15 crc kubenswrapper[4732]: E0131 09:03:15.543502 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:15 crc kubenswrapper[4732]: E0131 09:03:15.543920 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:16 crc kubenswrapper[4732]: I0131 09:03:16.542384 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:16 crc kubenswrapper[4732]: I0131 09:03:16.543036 4732 scope.go:117] "RemoveContainer" containerID="456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617" Jan 31 09:03:16 crc kubenswrapper[4732]: E0131 09:03:16.543945 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:17 crc kubenswrapper[4732]: I0131 09:03:17.229917 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/1.log" Jan 31 09:03:17 crc kubenswrapper[4732]: I0131 09:03:17.230249 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4mxsr" event={"ID":"8e23192f-14db-41ef-af89-4a76e325d9c1","Type":"ContainerStarted","Data":"98e5c23e9a8bde55626defc76f20f8510954c4ef79d762950e0790a7de4dce4f"} Jan 31 09:03:17 crc kubenswrapper[4732]: I0131 09:03:17.541691 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:17 crc kubenswrapper[4732]: I0131 09:03:17.541725 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:17 crc kubenswrapper[4732]: E0131 09:03:17.541868 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:17 crc kubenswrapper[4732]: E0131 09:03:17.541930 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:17 crc kubenswrapper[4732]: I0131 09:03:17.542223 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:17 crc kubenswrapper[4732]: E0131 09:03:17.542464 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:17 crc kubenswrapper[4732]: E0131 09:03:17.697437 4732 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:03:18 crc kubenswrapper[4732]: I0131 09:03:18.542541 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:18 crc kubenswrapper[4732]: E0131 09:03:18.542752 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:19 crc kubenswrapper[4732]: I0131 09:03:19.542199 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:19 crc kubenswrapper[4732]: I0131 09:03:19.542291 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:19 crc kubenswrapper[4732]: E0131 09:03:19.542395 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:19 crc kubenswrapper[4732]: E0131 09:03:19.542515 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:19 crc kubenswrapper[4732]: I0131 09:03:19.542225 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:19 crc kubenswrapper[4732]: E0131 09:03:19.542881 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:20 crc kubenswrapper[4732]: I0131 09:03:20.542316 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:20 crc kubenswrapper[4732]: E0131 09:03:20.542569 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:21 crc kubenswrapper[4732]: I0131 09:03:21.541793 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:21 crc kubenswrapper[4732]: I0131 09:03:21.541838 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:21 crc kubenswrapper[4732]: E0131 09:03:21.541977 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 09:03:21 crc kubenswrapper[4732]: E0131 09:03:21.542227 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 09:03:21 crc kubenswrapper[4732]: I0131 09:03:21.542293 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:21 crc kubenswrapper[4732]: E0131 09:03:21.542519 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.542226 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:22 crc kubenswrapper[4732]: E0131 09:03:22.543110 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7fgvm" podUID="3bd29a31-1a47-40da-afc5-6c4423067083" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.802734 4732 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.856626 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tg4xc"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.857426 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.858083 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.858853 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.863507 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.863938 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.864187 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.864499 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.866035 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.866220 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.866851 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.867040 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.867327 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.867582 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.867643 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.867981 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.868072 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.868334 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.868528 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.869133 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.869425 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.869622 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.869742 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.874513 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-54nxd"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.874797 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.875478 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.875844 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.876442 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xprfh"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.877301 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.878783 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.880217 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.883055 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.883280 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.889037 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.889453 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.890522 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.891494 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.891979 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.892995 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.915586 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-76d6v"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.916129 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.916698 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.917162 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.917302 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.917409 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.917790 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.918290 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.918411 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.918477 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.918551 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.918702 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.918816 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.918909 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.918935 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.918989 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.919279 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.919540 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.919740 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.919819 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-rt2jr"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.920300 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-rt2jr" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.920391 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.920474 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.920604 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.920804 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.921098 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.921391 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.921707 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.922330 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.922648 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923057 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923251 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923371 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923424 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923550 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923718 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923818 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923892 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923963 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.924060 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.923320 4732 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.924648 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.926561 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.926716 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.935047 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.935248 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.935887 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.936011 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.937006 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.937223 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.937833 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8t8ks"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.938233 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.938528 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.938732 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.938979 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.939105 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c8t6l"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.939747 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.939165 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.939242 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.939839 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8t8ks"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.939745 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.940036 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.941391 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.945218 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.950456 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4"]
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.950880 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-machine-approver-tls\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.950918 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81b523ca-b564-45d4-bad5-f7e236f2e6d0-node-pullsecrets\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.950946 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6377a401-b10b-455a-8906-f6706302b91f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.950968 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24m4\" (UniqueName: \"kubernetes.io/projected/922314ab-f199-4117-acab-bc641c1cda57-kube-api-access-r24m4\") pod \"openshift-apiserver-operator-796bbdcf4f-ncqs4\" (UID: \"922314ab-f199-4117-acab-bc641c1cda57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.950990 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6377a401-b10b-455a-8906-f6706302b91f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951011 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-config\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951044 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvpv9\" (UniqueName: \"kubernetes.io/projected/edb14eaf-7738-4139-9b1b-9557e7e37ffc-kube-api-access-hvpv9\") pod \"cluster-samples-operator-665b6dd947-vkrgj\" (UID: \"edb14eaf-7738-4139-9b1b-9557e7e37ffc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951068 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951089 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5nb\" (UniqueName: \"kubernetes.io/projected/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-kube-api-access-lc5nb\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951110 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-images\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951132 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf682\" (UniqueName: \"kubernetes.io/projected/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-kube-api-access-xf682\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951166 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-auth-proxy-config\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951189 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/499830ff-8add-4caf-b469-d1cbde569fb7-trusted-ca\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951211 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxjwq\" (UniqueName: \"kubernetes.io/projected/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-kube-api-access-lxjwq\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951232 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922314ab-f199-4117-acab-bc641c1cda57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ncqs4\" (UID: \"922314ab-f199-4117-acab-bc641c1cda57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951254 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8m2\" (UniqueName: \"kubernetes.io/projected/576b5a44-3c4c-4905-8d89-caed3b1eb43f-kube-api-access-ch8m2\") pod \"openshift-config-operator-7777fb866f-hzs92\" (UID: \"576b5a44-3c4c-4905-8d89-caed3b1eb43f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951277 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951723 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-config\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951758 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-config\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951785 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6377a401-b10b-455a-8906-f6706302b91f-audit-policies\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951811 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38051ff1-1715-41dd-aa28-53aea32c8e05-config\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951835 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6377a401-b10b-455a-8906-f6706302b91f-encryption-config\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951858 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-client-ca\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951879 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhgt\" (UniqueName: \"kubernetes.io/projected/541ea3c2-891c-4c3e-81fd-9d340112c62b-kube-api-access-jhhgt\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951901 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdzrp\" (UniqueName: \"kubernetes.io/projected/81e1781e-a935-4f3f-b2aa-9a0807f43c73-kube-api-access-wdzrp\") pod \"downloads-7954f5f757-rt2jr\" (UID: \"81e1781e-a935-4f3f-b2aa-9a0807f43c73\") " pod="openshift-console/downloads-7954f5f757-rt2jr"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951924 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-audit\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951943 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9mtk\" (UniqueName: \"kubernetes.io/projected/81b523ca-b564-45d4-bad5-f7e236f2e6d0-kube-api-access-s9mtk\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951963 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/576b5a44-3c4c-4905-8d89-caed3b1eb43f-serving-cert\") pod \"openshift-config-operator-7777fb866f-hzs92\" (UID: \"576b5a44-3c4c-4905-8d89-caed3b1eb43f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.951997 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-etcd-serving-ca\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952021 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81b523ca-b564-45d4-bad5-f7e236f2e6d0-encryption-config\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952041 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6377a401-b10b-455a-8906-f6706302b91f-serving-cert\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952072 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952094 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38051ff1-1715-41dd-aa28-53aea32c8e05-serving-cert\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952116 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/576b5a44-3c4c-4905-8d89-caed3b1eb43f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hzs92\" (UID: \"576b5a44-3c4c-4905-8d89-caed3b1eb43f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952137 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81b523ca-b564-45d4-bad5-f7e236f2e6d0-serving-cert\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952158 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6377a401-b10b-455a-8906-f6706302b91f-audit-dir\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952182 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bws5c\" (UniqueName: \"kubernetes.io/projected/219a04b6-e7bd-4138-bcc7-4f650537aa24-kube-api-access-bws5c\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc"
\"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.953788 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-client-ca\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.953825 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p24t7\" (UniqueName: \"kubernetes.io/projected/38051ff1-1715-41dd-aa28-53aea32c8e05-kube-api-access-p24t7\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.953846 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6377a401-b10b-455a-8906-f6706302b91f-etcd-client\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.953882 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922314ab-f199-4117-acab-bc641c1cda57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ncqs4\" (UID: \"922314ab-f199-4117-acab-bc641c1cda57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.953903 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.953921 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81b523ca-b564-45d4-bad5-f7e236f2e6d0-audit-dir\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954175 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-config\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954204 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541ea3c2-891c-4c3e-81fd-9d340112c62b-serving-cert\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954225 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38051ff1-1715-41dd-aa28-53aea32c8e05-service-ca-bundle\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954247 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954765 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952040 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954910 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/219a04b6-e7bd-4138-bcc7-4f650537aa24-serving-cert\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954974 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954993 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952316 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.952884 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955008 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81b523ca-b564-45d4-bad5-f7e236f2e6d0-etcd-client\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955196 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38051ff1-1715-41dd-aa28-53aea32c8e05-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: 
\"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955216 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955223 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ndf\" (UniqueName: \"kubernetes.io/projected/6377a401-b10b-455a-8906-f6706302b91f-kube-api-access-v9ndf\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954268 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955249 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/499830ff-8add-4caf-b469-d1cbde569fb7-serving-cert\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955275 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-image-import-ca\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954363 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955297 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-config\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955327 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/edb14eaf-7738-4139-9b1b-9557e7e37ffc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vkrgj\" (UID: \"edb14eaf-7738-4139-9b1b-9557e7e37ffc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954625 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955378 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.954755 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955442 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/499830ff-8add-4caf-b469-d1cbde569fb7-config\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.955489 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr859\" (UniqueName: \"kubernetes.io/projected/499830ff-8add-4caf-b469-d1cbde569fb7-kube-api-access-jr859\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.956725 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.957332 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tg4xc"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.969471 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.972269 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.972702 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.974558 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.975195 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.975427 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.976691 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.977123 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.977758 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.974886 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.989526 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.989903 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.991974 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.996105 4732 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"] Jan 31 09:03:22 crc kubenswrapper[4732]: I0131 09:03:22.996351 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.000760 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.003515 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.009202 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99dtb"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.009992 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.011643 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.012377 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.012529 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vpnm5"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.012912 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.013086 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.017341 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fsss9"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.019076 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.019468 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.019542 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.020405 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-76d6v"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.020635 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.020750 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.021174 4732 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.021628 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.023756 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.024447 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.024965 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-54nxd"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.026164 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c8t6l"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.028681 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h5q9f"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.029191 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h5q9f"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.030590 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.031092 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.031707 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.032009 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.033500 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.033902 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.069790 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kgqfp"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070162 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r24m4\" (UniqueName: \"kubernetes.io/projected/922314ab-f199-4117-acab-bc641c1cda57-kube-api-access-r24m4\") pod \"openshift-apiserver-operator-796bbdcf4f-ncqs4\" (UID: \"922314ab-f199-4117-acab-bc641c1cda57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070196 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6377a401-b10b-455a-8906-f6706302b91f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070214 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-config\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070243 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070265 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvpv9\" (UniqueName: \"kubernetes.io/projected/edb14eaf-7738-4139-9b1b-9557e7e37ffc-kube-api-access-hvpv9\") pod \"cluster-samples-operator-665b6dd947-vkrgj\" (UID: \"edb14eaf-7738-4139-9b1b-9557e7e37ffc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070284 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070301 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b35d0df8-53f0-4787-b0b4-c93be28f0127-console-serving-cert\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070317 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-oauth-serving-cert\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070335 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc5nb\" (UniqueName: \"kubernetes.io/projected/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-kube-api-access-lc5nb\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070373 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n"]
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.070825 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071050 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071623 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-config\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071692 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071712 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/639dacb9-2ea3-49d2-b5c4-996992c8e16a-etcd-ca\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071728 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/639dacb9-2ea3-49d2-b5c4-996992c8e16a-serving-cert\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071748 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf682\" (UniqueName: \"kubernetes.io/projected/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-kube-api-access-xf682\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071767 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-images\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071795 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-auth-proxy-config\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071811 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071829 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071849 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz728\" (UniqueName: \"kubernetes.io/projected/b35d0df8-53f0-4787-b0b4-c93be28f0127-kube-api-access-sz728\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071873 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/499830ff-8add-4caf-b469-d1cbde569fb7-trusted-ca\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071902 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxjwq\" (UniqueName: \"kubernetes.io/projected/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-kube-api-access-lxjwq\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071924 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-trusted-ca-bundle\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071947 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922314ab-f199-4117-acab-bc641c1cda57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ncqs4\" (UID: \"922314ab-f199-4117-acab-bc641c1cda57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071965 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639dacb9-2ea3-49d2-b5c4-996992c8e16a-config\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.071983 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-config\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072003 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch8m2\" (UniqueName: \"kubernetes.io/projected/576b5a44-3c4c-4905-8d89-caed3b1eb43f-kube-api-access-ch8m2\") pod \"openshift-config-operator-7777fb866f-hzs92\" (UID: \"576b5a44-3c4c-4905-8d89-caed3b1eb43f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072021 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072036 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-config\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072053 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6377a401-b10b-455a-8906-f6706302b91f-audit-policies\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072079 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-policies\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072095 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-dir\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072110 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072130 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072150 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38051ff1-1715-41dd-aa28-53aea32c8e05-config\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072164 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6377a401-b10b-455a-8906-f6706302b91f-encryption-config\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072179 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-client-ca\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072194 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072211 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-audit\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072228 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9mtk\" (UniqueName: \"kubernetes.io/projected/81b523ca-b564-45d4-bad5-f7e236f2e6d0-kube-api-access-s9mtk\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072247 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhgt\" (UniqueName: \"kubernetes.io/projected/541ea3c2-891c-4c3e-81fd-9d340112c62b-kube-api-access-jhhgt\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072261 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdzrp\" (UniqueName: \"kubernetes.io/projected/81e1781e-a935-4f3f-b2aa-9a0807f43c73-kube-api-access-wdzrp\") pod \"downloads-7954f5f757-rt2jr\" (UID: \"81e1781e-a935-4f3f-b2aa-9a0807f43c73\") " pod="openshift-console/downloads-7954f5f757-rt2jr"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072277 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cba3ef6a-8439-4317-bae9-01618d78512a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dv6hv\" (UID: \"cba3ef6a-8439-4317-bae9-01618d78512a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072304 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/576b5a44-3c4c-4905-8d89-caed3b1eb43f-serving-cert\") pod \"openshift-config-operator-7777fb866f-hzs92\" (UID: \"576b5a44-3c4c-4905-8d89-caed3b1eb43f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072320 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072334 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b35d0df8-53f0-4787-b0b4-c93be28f0127-console-oauth-config\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072357 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-etcd-serving-ca\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072371 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81b523ca-b564-45d4-bad5-f7e236f2e6d0-encryption-config\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072387 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6377a401-b10b-455a-8906-f6706302b91f-serving-cert\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072402 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/639dacb9-2ea3-49d2-b5c4-996992c8e16a-etcd-client\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072422 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072436 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38051ff1-1715-41dd-aa28-53aea32c8e05-serving-cert\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072451 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cba3ef6a-8439-4317-bae9-01618d78512a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dv6hv\" (UID: \"cba3ef6a-8439-4317-bae9-01618d78512a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072469 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/576b5a44-3c4c-4905-8d89-caed3b1eb43f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hzs92\" (UID: \"576b5a44-3c4c-4905-8d89-caed3b1eb43f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072486 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81b523ca-b564-45d4-bad5-f7e236f2e6d0-serving-cert\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072501 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6377a401-b10b-455a-8906-f6706302b91f-audit-dir\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072517 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072537 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bws5c\" (UniqueName: \"kubernetes.io/projected/219a04b6-e7bd-4138-bcc7-4f650537aa24-kube-api-access-bws5c\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072553 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqgz8\" (UniqueName: \"kubernetes.io/projected/8cc29c02-baeb-4f46-92d6-684343509ae1-kube-api-access-vqgz8\") pod \"dns-operator-744455d44c-vpnm5\" (UID: \"8cc29c02-baeb-4f46-92d6-684343509ae1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072569 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/639dacb9-2ea3-49d2-b5c4-996992c8e16a-etcd-service-ca\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072584 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxdbz\" (UniqueName: \"kubernetes.io/projected/639dacb9-2ea3-49d2-b5c4-996992c8e16a-kube-api-access-qxdbz\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072609 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922314ab-f199-4117-acab-bc641c1cda57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ncqs4\" (UID: \"922314ab-f199-4117-acab-bc641c1cda57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072624 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072638 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81b523ca-b564-45d4-bad5-f7e236f2e6d0-audit-dir\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072654 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-client-ca\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072689 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p24t7\" (UniqueName: \"kubernetes.io/projected/38051ff1-1715-41dd-aa28-53aea32c8e05-kube-api-access-p24t7\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072708 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6377a401-b10b-455a-8906-f6706302b91f-etcd-client\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072725 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvc79\" (UniqueName: \"kubernetes.io/projected/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-kube-api-access-nvc79\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072745 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-config\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072760 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541ea3c2-891c-4c3e-81fd-9d340112c62b-serving-cert\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072775 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-metrics-tls\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072790 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-console-config\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072808 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/219a04b6-e7bd-4138-bcc7-4f650537aa24-serving-cert\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072824 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38051ff1-1715-41dd-aa28-53aea32c8e05-service-ca-bundle\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072840 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072857 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072874 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cc29c02-baeb-4f46-92d6-684343509ae1-metrics-tls\") pod \"dns-operator-744455d44c-vpnm5\" (UID: \"8cc29c02-baeb-4f46-92d6-684343509ae1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072890 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81b523ca-b564-45d4-bad5-f7e236f2e6d0-etcd-client\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072906 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38051ff1-1715-41dd-aa28-53aea32c8e05-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072923 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9ndf\" (UniqueName: \"kubernetes.io/projected/6377a401-b10b-455a-8906-f6706302b91f-kube-api-access-v9ndf\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072940 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072961 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lsmw\" (UniqueName:
\"kubernetes.io/projected/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-kube-api-access-9lsmw\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.072983 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-service-ca\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073002 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/499830ff-8add-4caf-b469-d1cbde569fb7-serving-cert\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073019 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-image-import-ca\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073035 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cba3ef6a-8439-4317-bae9-01618d78512a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dv6hv\" (UID: \"cba3ef6a-8439-4317-bae9-01618d78512a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073056 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/edb14eaf-7738-4139-9b1b-9557e7e37ffc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vkrgj\" (UID: \"edb14eaf-7738-4139-9b1b-9557e7e37ffc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073072 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-config\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073097 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-trusted-ca\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073117 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/499830ff-8add-4caf-b469-d1cbde569fb7-config\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " 
pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073133 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr859\" (UniqueName: \"kubernetes.io/projected/499830ff-8add-4caf-b469-d1cbde569fb7-kube-api-access-jr859\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073150 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073171 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-machine-approver-tls\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073186 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81b523ca-b564-45d4-bad5-f7e236f2e6d0-node-pullsecrets\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073202 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6377a401-b10b-455a-8906-f6706302b91f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.073218 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.078750 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.080005 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-images\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.080066 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6377a401-b10b-455a-8906-f6706302b91f-audit-policies\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: 
\"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.081026 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-auth-proxy-config\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.081120 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38051ff1-1715-41dd-aa28-53aea32c8e05-config\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.082584 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8t8ks"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.083007 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-client-ca\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.084365 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.085196 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-etcd-serving-ca\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.087105 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6377a401-b10b-455a-8906-f6706302b91f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.089062 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38051ff1-1715-41dd-aa28-53aea32c8e05-serving-cert\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.089455 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81b523ca-b564-45d4-bad5-f7e236f2e6d0-audit-dir\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.091935 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/81b523ca-b564-45d4-bad5-f7e236f2e6d0-node-pullsecrets\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.092070 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-config\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.092712 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6377a401-b10b-455a-8906-f6706302b91f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.093298 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-config\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.095152 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-audit\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.095228 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-image-import-ca\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.095315 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6377a401-b10b-455a-8906-f6706302b91f-audit-dir\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.095593 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.096438 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/576b5a44-3c4c-4905-8d89-caed3b1eb43f-serving-cert\") pod \"openshift-config-operator-7777fb866f-hzs92\" (UID: \"576b5a44-3c4c-4905-8d89-caed3b1eb43f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.096495 4732 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/576b5a44-3c4c-4905-8d89-caed3b1eb43f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hzs92\" (UID: \"576b5a44-3c4c-4905-8d89-caed3b1eb43f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.097634 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81b523ca-b564-45d4-bad5-f7e236f2e6d0-encryption-config\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.099975 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.101894 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/499830ff-8add-4caf-b469-d1cbde569fb7-serving-cert\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.104106 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-client-ca\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.104841 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541ea3c2-891c-4c3e-81fd-9d340112c62b-serving-cert\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.105435 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-config\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.107067 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/922314ab-f199-4117-acab-bc641c1cda57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ncqs4\" (UID: \"922314ab-f199-4117-acab-bc641c1cda57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.108174 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81b523ca-b564-45d4-bad5-f7e236f2e6d0-etcd-client\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc 
kubenswrapper[4732]: I0131 09:03:23.110045 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/499830ff-8add-4caf-b469-d1cbde569fb7-trusted-ca\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.110094 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.110583 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.111202 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38051ff1-1715-41dd-aa28-53aea32c8e05-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.111776 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.112282 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-machine-approver-tls\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.112791 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.114534 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6377a401-b10b-455a-8906-f6706302b91f-etcd-client\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.114567 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.116696 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6377a401-b10b-455a-8906-f6706302b91f-serving-cert\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.117812 4732 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.118648 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38051ff1-1715-41dd-aa28-53aea32c8e05-service-ca-bundle\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.119832 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81b523ca-b564-45d4-bad5-f7e236f2e6d0-config\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.120272 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.120531 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.122055 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.122131 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81b523ca-b564-45d4-bad5-f7e236f2e6d0-serving-cert\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.122065 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wqf9f"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.122244 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.122567 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.123029 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.123031 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6377a401-b10b-455a-8906-f6706302b91f-encryption-config\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.123642 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/219a04b6-e7bd-4138-bcc7-4f650537aa24-serving-cert\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.123702 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.124067 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.131902 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/922314ab-f199-4117-acab-bc641c1cda57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ncqs4\" (UID: \"922314ab-f199-4117-acab-bc641c1cda57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.132075 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.132193 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.132282 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/edb14eaf-7738-4139-9b1b-9557e7e37ffc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vkrgj\" (UID: \"edb14eaf-7738-4139-9b1b-9557e7e37ffc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.135827 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bzk95"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.137176 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99dtb"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.137272 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.140703 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vpnm5"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.140779 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ljds4"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.141580 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.142186 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.143133 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.145857 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.146544 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.155366 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.156198 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fsss9"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.158533 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hn5wx"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.159282 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hn5wx" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.160624 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.163755 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/499830ff-8add-4caf-b469-d1cbde569fb7-config\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.170703 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.171421 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.173207 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.173836 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.173859 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-trusted-ca\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.173915 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.173961 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.174007 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.174064 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b35d0df8-53f0-4787-b0b4-c93be28f0127-console-serving-cert\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.175042 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.175331 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.175779 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-oauth-serving-cert\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176012 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176036 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/639dacb9-2ea3-49d2-b5c4-996992c8e16a-etcd-ca\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176069 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/639dacb9-2ea3-49d2-b5c4-996992c8e16a-serving-cert\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176168 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176255 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176282 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz728\" (UniqueName: \"kubernetes.io/projected/b35d0df8-53f0-4787-b0b4-c93be28f0127-kube-api-access-sz728\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176580 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-oauth-serving-cert\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176736 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-trusted-ca-bundle\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc 
kubenswrapper[4732]: I0131 09:03:23.176791 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639dacb9-2ea3-49d2-b5c4-996992c8e16a-config\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176830 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-dir\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176854 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176882 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-policies\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176910 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.176988 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177038 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cba3ef6a-8439-4317-bae9-01618d78512a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dv6hv\" (UID: \"cba3ef6a-8439-4317-bae9-01618d78512a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177071 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177435 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b35d0df8-53f0-4787-b0b4-c93be28f0127-console-oauth-config\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177476 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/639dacb9-2ea3-49d2-b5c4-996992c8e16a-etcd-client\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177503 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cba3ef6a-8439-4317-bae9-01618d78512a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dv6hv\" (UID: \"cba3ef6a-8439-4317-bae9-01618d78512a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177586 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-trusted-ca-bundle\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177587 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177679 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqgz8\" (UniqueName: \"kubernetes.io/projected/8cc29c02-baeb-4f46-92d6-684343509ae1-kube-api-access-vqgz8\") pod \"dns-operator-744455d44c-vpnm5\" (UID: \"8cc29c02-baeb-4f46-92d6-684343509ae1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177681 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-policies\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177699 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/639dacb9-2ea3-49d2-b5c4-996992c8e16a-etcd-service-ca\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177718 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxdbz\" (UniqueName: \"kubernetes.io/projected/639dacb9-2ea3-49d2-b5c4-996992c8e16a-kube-api-access-qxdbz\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177757 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvc79\" (UniqueName: \"kubernetes.io/projected/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-kube-api-access-nvc79\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177778 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-metrics-tls\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177794 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-console-config\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177816 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cc29c02-baeb-4f46-92d6-684343509ae1-metrics-tls\") pod \"dns-operator-744455d44c-vpnm5\" (UID: \"8cc29c02-baeb-4f46-92d6-684343509ae1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177859 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177877 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lsmw\" (UniqueName: \"kubernetes.io/projected/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-kube-api-access-9lsmw\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177895 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-service-ca\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177928 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cba3ef6a-8439-4317-bae9-01618d78512a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dv6hv\" (UID: \"cba3ef6a-8439-4317-bae9-01618d78512a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.178013 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177118 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-dir\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177207 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.177077 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.178822 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-console-config\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.179494 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.181084 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.181092 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.181598 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.181768 4732 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.183134 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b35d0df8-53f0-4787-b0b4-c93be28f0127-console-serving-cert\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.183705 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xprfh"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.184346 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.184480 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b35d0df8-53f0-4787-b0b4-c93be28f0127-console-oauth-config\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.185341 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.185383 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c7j22"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.187120 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-f78bs"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.187575 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.188524 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.189368 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.190517 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cc29c02-baeb-4f46-92d6-684343509ae1-metrics-tls\") pod \"dns-operator-744455d44c-vpnm5\" (UID: \"8cc29c02-baeb-4f46-92d6-684343509ae1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.192400 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.193750 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/639dacb9-2ea3-49d2-b5c4-996992c8e16a-serving-cert\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.194376 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.195697 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.197255 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wqf9f"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.198298 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b35d0df8-53f0-4787-b0b4-c93be28f0127-service-ca\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.198978 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ljds4"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.200631 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.201702 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.202905 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.204678 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c7j22"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.206430 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bzk95"] Jan 31 09:03:23 crc 
kubenswrapper[4732]: I0131 09:03:23.207914 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.209817 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.210914 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/639dacb9-2ea3-49d2-b5c4-996992c8e16a-etcd-client\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.211417 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rt2jr"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.212623 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kgqfp"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.214284 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.215538 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.217578 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.218585 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.219689 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.220787 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-f78bs"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.221449 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.223847 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.224881 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dmhxf"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.228424 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.228956 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.230101 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639dacb9-2ea3-49d2-b5c4-996992c8e16a-config\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.233853 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hn5wx"] Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.242169 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.247248 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/639dacb9-2ea3-49d2-b5c4-996992c8e16a-etcd-ca\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.262173 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.269025 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/639dacb9-2ea3-49d2-b5c4-996992c8e16a-etcd-service-ca\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.280794 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.301345 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.320914 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.330092 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cba3ef6a-8439-4317-bae9-01618d78512a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dv6hv\" (UID: \"cba3ef6a-8439-4317-bae9-01618d78512a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.341097 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.348796 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cba3ef6a-8439-4317-bae9-01618d78512a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dv6hv\" (UID: 
\"cba3ef6a-8439-4317-bae9-01618d78512a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.361393 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.381559 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.401580 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.422105 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.441501 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.461904 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.472076 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-metrics-tls\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.481608 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.501470 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.526853 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.542281 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.542281 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.542299 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.553187 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.555812 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-trusted-ca\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.581296 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.602238 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.622876 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.641412 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.661477 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.681316 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.701794 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.720806 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.742595 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.762389 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.782884 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.802950 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.822791 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.842474 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.862047 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.881607 
4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.901993 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.938837 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r24m4\" (UniqueName: \"kubernetes.io/projected/922314ab-f199-4117-acab-bc641c1cda57-kube-api-access-r24m4\") pod \"openshift-apiserver-operator-796bbdcf4f-ncqs4\" (UID: \"922314ab-f199-4117-acab-bc641c1cda57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.956121 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvpv9\" (UniqueName: \"kubernetes.io/projected/edb14eaf-7738-4139-9b1b-9557e7e37ffc-kube-api-access-hvpv9\") pod \"cluster-samples-operator-665b6dd947-vkrgj\" (UID: \"edb14eaf-7738-4139-9b1b-9557e7e37ffc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" Jan 31 09:03:23 crc kubenswrapper[4732]: I0131 09:03:23.975923 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.011075 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.012319 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc5nb\" (UniqueName: \"kubernetes.io/projected/62bbd0d6-eba8-4737-9608-3f7a3dd6a157-kube-api-access-lc5nb\") pod \"machine-approver-56656f9798-sf8wd\" (UID: \"62bbd0d6-eba8-4737-9608-3f7a3dd6a157\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.023199 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.036403 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.042692 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.062126 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.097562 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdzrp\" (UniqueName: \"kubernetes.io/projected/81e1781e-a935-4f3f-b2aa-9a0807f43c73-kube-api-access-wdzrp\") pod \"downloads-7954f5f757-rt2jr\" (UID: \"81e1781e-a935-4f3f-b2aa-9a0807f43c73\") " pod="openshift-console/downloads-7954f5f757-rt2jr" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.100100 4732 request.go:700] Waited for 1.015520274s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.105349 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.118375 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxjwq\" (UniqueName: \"kubernetes.io/projected/e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a-kube-api-access-lxjwq\") pod \"machine-api-operator-5694c8668f-54nxd\" (UID: \"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.121333 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.125708 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rt2jr" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.141961 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.171848 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.175375 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p24t7\" (UniqueName: \"kubernetes.io/projected/38051ff1-1715-41dd-aa28-53aea32c8e05-kube-api-access-p24t7\") pod \"authentication-operator-69f744f599-pxn6w\" (UID: \"38051ff1-1715-41dd-aa28-53aea32c8e05\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.197955 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr859\" (UniqueName: \"kubernetes.io/projected/499830ff-8add-4caf-b469-d1cbde569fb7-kube-api-access-jr859\") pod \"console-operator-58897d9998-76d6v\" (UID: \"499830ff-8add-4caf-b469-d1cbde569fb7\") " pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.209939 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.226164 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch8m2\" (UniqueName: \"kubernetes.io/projected/576b5a44-3c4c-4905-8d89-caed3b1eb43f-kube-api-access-ch8m2\") pod \"openshift-config-operator-7777fb866f-hzs92\" (UID: \"576b5a44-3c4c-4905-8d89-caed3b1eb43f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.243164 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bws5c\" (UniqueName: \"kubernetes.io/projected/219a04b6-e7bd-4138-bcc7-4f650537aa24-kube-api-access-bws5c\") pod \"controller-manager-879f6c89f-tg4xc\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.243708 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.269805 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.275876 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" event={"ID":"62bbd0d6-eba8-4737-9608-3f7a3dd6a157","Type":"ContainerStarted","Data":"cbd42d42f8482009793ac43a3ebdd0114adc30dd90602a90c07790e0ec9ac076"} Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.278273 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhhgt\" (UniqueName: \"kubernetes.io/projected/541ea3c2-891c-4c3e-81fd-9d340112c62b-kube-api-access-jhhgt\") pod \"route-controller-manager-6576b87f9c-5s6dp\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.300966 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9mtk\" (UniqueName: \"kubernetes.io/projected/81b523ca-b564-45d4-bad5-f7e236f2e6d0-kube-api-access-s9mtk\") pod \"apiserver-76f77b778f-xprfh\" (UID: \"81b523ca-b564-45d4-bad5-f7e236f2e6d0\") " pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.322391 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9ndf\" (UniqueName: \"kubernetes.io/projected/6377a401-b10b-455a-8906-f6706302b91f-kube-api-access-v9ndf\") pod \"apiserver-7bbb656c7d-85mvk\" (UID: \"6377a401-b10b-455a-8906-f6706302b91f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.351910 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.355351 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf682\" (UniqueName: \"kubernetes.io/projected/16c9233d-0b27-4994-bc3d-62d4ec86a4ec-kube-api-access-xf682\") pod \"cluster-image-registry-operator-dc59b4c8b-r9cz8\" (UID: \"16c9233d-0b27-4994-bc3d-62d4ec86a4ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.361594 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.374076 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xprfh" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.380937 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.385554 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.401899 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.414094 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.416105 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.422301 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.437736 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.440890 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.447182 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.449351 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rt2jr"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.450551 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pxn6w"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.451102 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.461425 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 09:03:24 crc kubenswrapper[4732]: W0131 09:03:24.461634 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81e1781e_a935_4f3f_b2aa_9a0807f43c73.slice/crio-69d539fbc0bd1b9a5ce51d70edcc880d02dc1cb8c8cb69d25652009a0ba0e8ef WatchSource:0}: Error finding container 69d539fbc0bd1b9a5ce51d70edcc880d02dc1cb8c8cb69d25652009a0ba0e8ef: Status 404 returned error can't find the container with id 69d539fbc0bd1b9a5ce51d70edcc880d02dc1cb8c8cb69d25652009a0ba0e8ef Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.473136 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.488570 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.502398 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.517848 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.521729 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.541699 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.544970 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.560627 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-54nxd"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.563213 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.581761 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.605243 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.622193 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.643074 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.663415 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.664732 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-76d6v"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.682256 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: W0131 09:03:24.687052 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod499830ff_8add_4caf_b469_d1cbde569fb7.slice/crio-641c54243ba760d90f524d9c623ae4bf9adfd23728fc476c0d4306671ed6afb3 WatchSource:0}: Error finding container 641c54243ba760d90f524d9c623ae4bf9adfd23728fc476c0d4306671ed6afb3: Status 404 returned error can't find the container with id 641c54243ba760d90f524d9c623ae4bf9adfd23728fc476c0d4306671ed6afb3 Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.721248 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.722486 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.733876 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tg4xc"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.741277 4732 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.761321 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 09:03:24 crc kubenswrapper[4732]: W0131 09:03:24.765987 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod219a04b6_e7bd_4138_bcc7_4f650537aa24.slice/crio-371f900003f95ceb406307c3e3064fc3164d972cb97c8e25812cb0e57334cb37 WatchSource:0}: Error finding container 371f900003f95ceb406307c3e3064fc3164d972cb97c8e25812cb0e57334cb37: Status 404 returned error can't find the container with id 371f900003f95ceb406307c3e3064fc3164d972cb97c8e25812cb0e57334cb37 Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.782630 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.802101 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.821835 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.841915 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.845848 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xprfh"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.861837 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.881545 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.884888 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.900564 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.901579 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.902028 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8"] Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.922118 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.940971 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 09:03:24 crc kubenswrapper[4732]: W0131 09:03:24.960498 4732 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6377a401_b10b_455a_8906_f6706302b91f.slice/crio-d339f7b2a8861419aa4238e205cfaa9dcdfd934fadeaf613a8eadba1a515f45e WatchSource:0}: Error finding container d339f7b2a8861419aa4238e205cfaa9dcdfd934fadeaf613a8eadba1a515f45e: Status 404 returned error can't find the container with id d339f7b2a8861419aa4238e205cfaa9dcdfd934fadeaf613a8eadba1a515f45e Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.961692 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 09:03:24 crc kubenswrapper[4732]: W0131 09:03:24.962881 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541ea3c2_891c_4c3e_81fd_9d340112c62b.slice/crio-2ecd69d7eb9bd214f9aff82913ddef7756830d15321e64e3fd947685beb0e5f0 WatchSource:0}: Error finding container 2ecd69d7eb9bd214f9aff82913ddef7756830d15321e64e3fd947685beb0e5f0: Status 404 returned error can't find the container with id 2ecd69d7eb9bd214f9aff82913ddef7756830d15321e64e3fd947685beb0e5f0 Jan 31 09:03:24 crc kubenswrapper[4732]: I0131 09:03:24.981722 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.002055 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.021491 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.043745 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.061384 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.081638 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.119531 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz728\" (UniqueName: \"kubernetes.io/projected/b35d0df8-53f0-4787-b0b4-c93be28f0127-kube-api-access-sz728\") pod \"console-f9d7485db-8t8ks\" (UID: \"b35d0df8-53f0-4787-b0b4-c93be28f0127\") " pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.119705 4732 request.go:700] Waited for 1.941511015s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.136453 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqgz8\" (UniqueName: \"kubernetes.io/projected/8cc29c02-baeb-4f46-92d6-684343509ae1-kube-api-access-vqgz8\") pod \"dns-operator-744455d44c-vpnm5\" (UID: \"8cc29c02-baeb-4f46-92d6-684343509ae1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" Jan 31 09:03:25 crc kubenswrapper[4732]: 
I0131 09:03:25.160720 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxdbz\" (UniqueName: \"kubernetes.io/projected/639dacb9-2ea3-49d2-b5c4-996992c8e16a-kube-api-access-qxdbz\") pod \"etcd-operator-b45778765-fsss9\" (UID: \"639dacb9-2ea3-49d2-b5c4-996992c8e16a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.181269 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvc79\" (UniqueName: \"kubernetes.io/projected/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-kube-api-access-nvc79\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.197271 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64e4fed2-f31d-4d3f-ad77-d55fdddacb4d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-44wvz\" (UID: \"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.216065 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lsmw\" (UniqueName: \"kubernetes.io/projected/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-kube-api-access-9lsmw\") pod \"oauth-openshift-558db77b4-c8t6l\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") " pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.234795 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cba3ef6a-8439-4317-bae9-01618d78512a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dv6hv\" (UID: \"cba3ef6a-8439-4317-bae9-01618d78512a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.241596 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.261379 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.280876 4732 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.283505 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" event={"ID":"576b5a44-3c4c-4905-8d89-caed3b1eb43f","Type":"ContainerStarted","Data":"5eb2173f2f321ecef00d9ddbca74e917d0284c4b6d0c5c9a6a07b3f57416a7ef"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.283551 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" event={"ID":"576b5a44-3c4c-4905-8d89-caed3b1eb43f","Type":"ContainerStarted","Data":"57238662a5c119eb0c282562328b4dd2ebd61b348f8f10d57093c2429537bfd7"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.285443 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xprfh" 
event={"ID":"81b523ca-b564-45d4-bad5-f7e236f2e6d0","Type":"ContainerStarted","Data":"b938d3308a52bdac0bd04510a4849e86ea98820183607b27ed9d059b6063330b"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.287385 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" event={"ID":"219a04b6-e7bd-4138-bcc7-4f650537aa24","Type":"ContainerStarted","Data":"371f900003f95ceb406307c3e3064fc3164d972cb97c8e25812cb0e57334cb37"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.295291 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" event={"ID":"541ea3c2-891c-4c3e-81fd-9d340112c62b","Type":"ContainerStarted","Data":"2ecd69d7eb9bd214f9aff82913ddef7756830d15321e64e3fd947685beb0e5f0"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.298200 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4" event={"ID":"922314ab-f199-4117-acab-bc641c1cda57","Type":"ContainerStarted","Data":"2a30e968ee3788a3dfb596a6a4ef6e3604ffde568b635c079c4305de05cec711"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.298232 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4" event={"ID":"922314ab-f199-4117-acab-bc641c1cda57","Type":"ContainerStarted","Data":"97ea83c954c01d03343eaa1002c3f9f954bc16cc777328cd201bd741063a8167"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.299681 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" event={"ID":"16c9233d-0b27-4994-bc3d-62d4ec86a4ec","Type":"ContainerStarted","Data":"3aa19a38da94888e1dd8da8784f9963d0705154dc8c4811ec56c194d07b436e7"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.301005 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.301282 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" event={"ID":"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a","Type":"ContainerStarted","Data":"dd6a20990ae3fd22e8557c81b6269326055d1e8ccf74975eda50405b7a5efb1f"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.301314 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" event={"ID":"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a","Type":"ContainerStarted","Data":"59a0a262020b0038e7bf24dab0914d3a017967ef6e3d45740704beb5029adfeb"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.302878 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" event={"ID":"62bbd0d6-eba8-4737-9608-3f7a3dd6a157","Type":"ContainerStarted","Data":"ac6d8de23a15f21cf46244c3780fb205834aa9873e224185ee55a963dedc4550"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.303884 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" event={"ID":"38051ff1-1715-41dd-aa28-53aea32c8e05","Type":"ContainerStarted","Data":"16afdb2e2edaf902175b439aaa6cd5fe6fd9707673a88258a80788c960c9e523"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.303914 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" event={"ID":"38051ff1-1715-41dd-aa28-53aea32c8e05","Type":"ContainerStarted","Data":"6c603e1663f65b4e996a658c7c5999f24b23f3d861e541eb09c170b40d6fee9b"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.306451 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" event={"ID":"6377a401-b10b-455a-8906-f6706302b91f","Type":"ContainerStarted","Data":"d339f7b2a8861419aa4238e205cfaa9dcdfd934fadeaf613a8eadba1a515f45e"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.307808 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" event={"ID":"edb14eaf-7738-4139-9b1b-9557e7e37ffc","Type":"ContainerStarted","Data":"dbeb278dd30abd13a8b2ea913fbd368c73fcf7210a7b11439c474d77cb50a386"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.309637 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rt2jr" event={"ID":"81e1781e-a935-4f3f-b2aa-9a0807f43c73","Type":"ContainerStarted","Data":"40c260cfcf19a820277cc939bbd3087f26b08b3f91cab04be0242fa82609ec1b"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.309683 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rt2jr" event={"ID":"81e1781e-a935-4f3f-b2aa-9a0807f43c73","Type":"ContainerStarted","Data":"69d539fbc0bd1b9a5ce51d70edcc880d02dc1cb8c8cb69d25652009a0ba0e8ef"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.309892 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rt2jr" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.310756 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-76d6v" event={"ID":"499830ff-8add-4caf-b469-d1cbde569fb7","Type":"ContainerStarted","Data":"e422637523d7d6fb77c3fa0f61232c3f748e50892051351e56744b31456a2e86"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.310784 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-76d6v" event={"ID":"499830ff-8add-4caf-b469-d1cbde569fb7","Type":"ContainerStarted","Data":"641c54243ba760d90f524d9c623ae4bf9adfd23728fc476c0d4306671ed6afb3"} Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.310942 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-76d6v" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.311512 4732 patch_prober.go:28] interesting pod/downloads-7954f5f757-rt2jr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.311560 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rt2jr" podUID="81e1781e-a935-4f3f-b2aa-9a0807f43c73" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.311879 4732 patch_prober.go:28] interesting pod/console-operator-58897d9998-76d6v container/console-operator namespace/openshift-console-operator: Readiness probe status=failure 
output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.311905 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-76d6v" podUID="499830ff-8add-4caf-b469-d1cbde569fb7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.325050 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.333109 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.340961 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.343889 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.358645 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.361224 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.366866 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.374544 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.381818 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.389878 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.403768 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.422723 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.443020 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.461307 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.490914 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.531845 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ac602fa-14af-4ae0-a538-d73e938db036-installation-pull-secrets\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.531896 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/075f442e-a691-4856-a6ea-e21f1dcbcb20-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zcd2h\" (UID: \"075f442e-a691-4856-a6ea-e21f1dcbcb20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.531945 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-registry-tls\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.531971 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpjp\" (UniqueName: \"kubernetes.io/projected/075f442e-a691-4856-a6ea-e21f1dcbcb20-kube-api-access-hnpjp\") pod \"kube-storage-version-migrator-operator-b67b599dd-zcd2h\" (UID: \"075f442e-a691-4856-a6ea-e21f1dcbcb20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.532046 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-bound-sa-token\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.532261 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-trusted-ca\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.532386 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ac602fa-14af-4ae0-a538-d73e938db036-ca-trust-extracted\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.532471 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb2p9\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-kube-api-access-bb2p9\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.532559 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.532607 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-registry-certificates\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.532633 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/075f442e-a691-4856-a6ea-e21f1dcbcb20-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zcd2h\" (UID: \"075f442e-a691-4856-a6ea-e21f1dcbcb20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: E0131 09:03:25.533037 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.033015598 +0000 UTC m=+144.338891792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.541511 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.562346 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.634042 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:25 crc kubenswrapper[4732]: E0131 09:03:25.634447 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.13442105 +0000 UTC m=+144.440297254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.634499 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/075f442e-a691-4856-a6ea-e21f1dcbcb20-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zcd2h\" (UID: \"075f442e-a691-4856-a6ea-e21f1dcbcb20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.634541 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c0ada8b-e2dc-418c-a43e-33789285388f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.635281 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/075f442e-a691-4856-a6ea-e21f1dcbcb20-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zcd2h\" (UID: \"075f442e-a691-4856-a6ea-e21f1dcbcb20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.635335 4732 reconciler_common.go:245] 
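The E-level nestedpendingoperations entries show the volume manager refusing to retry a failed MountDevice/TearDown for 500ms (durationBeforeRetry); kubelet backs off repeated failures of the same operation with a doubling delay. A minimal sketch of that pattern (the 500ms start comes from the log; the cap and retry count are assumptions for the sketch, not kubelet's actual code):

```go
// backoff_sketch.go - illustrates the durationBeforeRetry pattern behind
// the nestedpendingoperations entries: each failure doubles the wait.
package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // initial durationBeforeRetry from the log
	const maxDelay = 2 * time.Minute // assumed cap for this sketch

	mountDevice := func() error {
		// Stand-in for attacher.MountDevice; fails while the CSI driver
		// kubevirt.io.hostpath-provisioner is not yet registered.
		return errors.New("driver not found in the list of registered CSI drivers")
	}

	for attempt := 1; attempt <= 5; attempt++ {
		if err := mountDevice(); err == nil {
			fmt.Println("mounted")
			return
		}
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```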
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-apiservice-cert\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.635459 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j96n4\" (UniqueName: \"kubernetes.io/projected/498d64fc-0d0f-43c6-aaae-bd3c5f0d7873-kube-api-access-j96n4\") pod \"control-plane-machine-set-operator-78cbb6b69f-v69mc\" (UID: \"498d64fc-0d0f-43c6-aaae-bd3c5f0d7873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.635543 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-bound-sa-token\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.635724 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-plugins-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.635844 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b99fdc-6d61-46e1-b093-b1b92efce54c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xgh7\" (UID: \"12b99fdc-6d61-46e1-b093-b1b92efce54c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.635875 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-tmpfs\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.635919 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33169e52-3fee-462c-b341-46563ddbf5aa-serving-cert\") pod \"service-ca-operator-777779d784-bzk95\" (UID: \"33169e52-3fee-462c-b341-46563ddbf5aa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636009 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s48bl\" (UniqueName: \"kubernetes.io/projected/56a9a21e-28d2-4386-9fe0-947c3a39ab6a-kube-api-access-s48bl\") pod \"migrator-59844c95c7-cx8cr\" (UID: \"56a9a21e-28d2-4386-9fe0-947c3a39ab6a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636053 4732 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8814e7c8-5104-40f7-9761-4feedc15697b-stats-auth\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636093 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffad13f5-fb20-46ac-b886-c7e5a29b6599-config\") pod \"kube-controller-manager-operator-78b949d7b-bpncg\" (UID: \"ffad13f5-fb20-46ac-b886-c7e5a29b6599\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636126 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprwd\" (UniqueName: \"kubernetes.io/projected/caaa7607-b47d-43ca-adff-f9135baf7262-kube-api-access-hprwd\") pod \"package-server-manager-789f6589d5-wq7bg\" (UID: \"caaa7607-b47d-43ca-adff-f9135baf7262\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636207 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46qw\" (UniqueName: \"kubernetes.io/projected/a3302e69-0f73-4974-a8ac-af1992933147-kube-api-access-w46qw\") pod \"machine-config-controller-84d6567774-llpv8\" (UID: \"a3302e69-0f73-4974-a8ac-af1992933147\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636237 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54-config\") pod \"kube-apiserver-operator-766d6c64bb-4qmgq\" (UID: \"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636290 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwh5d\" (UniqueName: \"kubernetes.io/projected/12b99fdc-6d61-46e1-b093-b1b92efce54c-kube-api-access-xwh5d\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xgh7\" (UID: \"12b99fdc-6d61-46e1-b093-b1b92efce54c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636336 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ac602fa-14af-4ae0-a538-d73e938db036-ca-trust-extracted\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636381 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3302e69-0f73-4974-a8ac-af1992933147-proxy-tls\") pod \"machine-config-controller-84d6567774-llpv8\" (UID: \"a3302e69-0f73-4974-a8ac-af1992933147\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:25 crc 
kubenswrapper[4732]: I0131 09:03:25.636406 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzrl\" (UniqueName: \"kubernetes.io/projected/4149606a-3fbc-4da9-ba05-dc473b492a89-kube-api-access-2jzrl\") pod \"service-ca-9c57cc56f-wqf9f\" (UID: \"4149606a-3fbc-4da9-ba05-dc473b492a89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636450 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb2p9\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-kube-api-access-bb2p9\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636497 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffad13f5-fb20-46ac-b886-c7e5a29b6599-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bpncg\" (UID: \"ffad13f5-fb20-46ac-b886-c7e5a29b6599\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636546 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nvg\" (UniqueName: \"kubernetes.io/projected/f4d0ed50-aa9b-4a62-b340-882ddf73f008-kube-api-access-67nvg\") pod \"marketplace-operator-79b997595-ljds4\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636703 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4149606a-3fbc-4da9-ba05-dc473b492a89-signing-key\") pod \"service-ca-9c57cc56f-wqf9f\" (UID: \"4149606a-3fbc-4da9-ba05-dc473b492a89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636767 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33169e52-3fee-462c-b341-46563ddbf5aa-config\") pod \"service-ca-operator-777779d784-bzk95\" (UID: \"33169e52-3fee-462c-b341-46563ddbf5aa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636879 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e6408b2-d73d-4fc3-86f7-d29ea59ad32e-cert\") pod \"ingress-canary-hn5wx\" (UID: \"1e6408b2-d73d-4fc3-86f7-d29ea59ad32e\") " pod="openshift-ingress-canary/ingress-canary-hn5wx" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636908 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ac602fa-14af-4ae0-a538-d73e938db036-ca-trust-extracted\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.636916 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/0558933a-c8d6-45dc-aeaf-af86190b15a0-config-volume\") pod \"collect-profiles-29497500-p2gn7\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637085 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-webhook-cert\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637182 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/058f5386-f340-4a52-bfc8-9b1a60515c9b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7fqsl\" (UID: \"058f5386-f340-4a52-bfc8-9b1a60515c9b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637244 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637289 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8814e7c8-5104-40f7-9761-4feedc15697b-metrics-certs\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637343 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llg68\" (UniqueName: \"kubernetes.io/projected/1d02584d-db7d-4bc0-8cd9-33081993309b-kube-api-access-llg68\") pod \"catalog-operator-68c6474976-wb8wq\" (UID: \"1d02584d-db7d-4bc0-8cd9-33081993309b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637365 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgzf\" (UniqueName: \"kubernetes.io/projected/eb80e0fc-6378-4d0d-a8b0-6c662073ed4d-kube-api-access-wzgzf\") pod \"dns-default-f78bs\" (UID: \"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d\") " pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637434 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-registry-certificates\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637466 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-csi-data-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637484 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b99fdc-6d61-46e1-b093-b1b92efce54c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xgh7\" (UID: \"12b99fdc-6d61-46e1-b093-b1b92efce54c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637508 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4qmgq\" (UID: \"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637525 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb72h\" (UniqueName: \"kubernetes.io/projected/33169e52-3fee-462c-b341-46563ddbf5aa-kube-api-access-rb72h\") pod \"service-ca-operator-777779d784-bzk95\" (UID: \"33169e52-3fee-462c-b341-46563ddbf5aa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:25 crc kubenswrapper[4732]: E0131 09:03:25.637687 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.137634938 +0000 UTC m=+144.443511142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.637871 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ac602fa-14af-4ae0-a538-d73e938db036-installation-pull-secrets\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638080 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knh8t\" (UniqueName: \"kubernetes.io/projected/8814e7c8-5104-40f7-9761-4feedc15697b-kube-api-access-knh8t\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638185 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvhl\" (UniqueName: \"kubernetes.io/projected/d64c27a7-b418-450e-9067-dde0cd145597-kube-api-access-xtvhl\") pod \"machine-config-server-dmhxf\" (UID: \"d64c27a7-b418-450e-9067-dde0cd145597\") " pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638405 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-registry-tls\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638541 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/caaa7607-b47d-43ca-adff-f9135baf7262-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wq7bg\" (UID: \"caaa7607-b47d-43ca-adff-f9135baf7262\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638591 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d02584d-db7d-4bc0-8cd9-33081993309b-srv-cert\") pod \"catalog-operator-68c6474976-wb8wq\" (UID: \"1d02584d-db7d-4bc0-8cd9-33081993309b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638635 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnpjp\" (UniqueName: \"kubernetes.io/projected/075f442e-a691-4856-a6ea-e21f1dcbcb20-kube-api-access-hnpjp\") pod \"kube-storage-version-migrator-operator-b67b599dd-zcd2h\" (UID: \"075f442e-a691-4856-a6ea-e21f1dcbcb20\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638753 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7c0ada8b-e2dc-418c-a43e-33789285388f-images\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638781 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4149606a-3fbc-4da9-ba05-dc473b492a89-signing-cabundle\") pod \"service-ca-9c57cc56f-wqf9f\" (UID: \"4149606a-3fbc-4da9-ba05-dc473b492a89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638849 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/058f5386-f340-4a52-bfc8-9b1a60515c9b-srv-cert\") pod \"olm-operator-6b444d44fb-7fqsl\" (UID: \"058f5386-f340-4a52-bfc8-9b1a60515c9b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638878 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm54m\" (UniqueName: \"kubernetes.io/projected/058f5386-f340-4a52-bfc8-9b1a60515c9b-kube-api-access-dm54m\") pod \"olm-operator-6b444d44fb-7fqsl\" (UID: \"058f5386-f340-4a52-bfc8-9b1a60515c9b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638898 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chpb\" (UniqueName: \"kubernetes.io/projected/7c0ada8b-e2dc-418c-a43e-33789285388f-kube-api-access-8chpb\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638937 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/498d64fc-0d0f-43c6-aaae-bd3c5f0d7873-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-v69mc\" (UID: \"498d64fc-0d0f-43c6-aaae-bd3c5f0d7873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.638958 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3302e69-0f73-4974-a8ac-af1992933147-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-llpv8\" (UID: \"a3302e69-0f73-4974-a8ac-af1992933147\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639022 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-trusted-ca\") pod 
\"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639125 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffad13f5-fb20-46ac-b886-c7e5a29b6599-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bpncg\" (UID: \"ffad13f5-fb20-46ac-b886-c7e5a29b6599\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639240 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0558933a-c8d6-45dc-aeaf-af86190b15a0-secret-volume\") pod \"collect-profiles-29497500-p2gn7\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639266 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb80e0fc-6378-4d0d-a8b0-6c662073ed4d-config-volume\") pod \"dns-default-f78bs\" (UID: \"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d\") " pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639317 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-mountpoint-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639352 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8814e7c8-5104-40f7-9761-4feedc15697b-service-ca-bundle\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639374 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d64c27a7-b418-450e-9067-dde0cd145597-certs\") pod \"machine-config-server-dmhxf\" (UID: \"d64c27a7-b418-450e-9067-dde0cd145597\") " pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639501 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d64c27a7-b418-450e-9067-dde0cd145597-node-bootstrap-token\") pod \"machine-config-server-dmhxf\" (UID: \"d64c27a7-b418-450e-9067-dde0cd145597\") " pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639522 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d02584d-db7d-4bc0-8cd9-33081993309b-profile-collector-cert\") pod \"catalog-operator-68c6474976-wb8wq\" (UID: \"1d02584d-db7d-4bc0-8cd9-33081993309b\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639539 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4qmgq\" (UID: \"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.639604 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ljds4\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.640623 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-registry-certificates\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.640673 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8814e7c8-5104-40f7-9761-4feedc15697b-default-certificate\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.640714 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-socket-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.641117 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h7wn\" (UniqueName: \"kubernetes.io/projected/2325b276-ee4d-438d-b9d6-d7de3024ba96-kube-api-access-6h7wn\") pod \"multus-admission-controller-857f4d67dd-kgqfp\" (UID: \"2325b276-ee4d-438d-b9d6-d7de3024ba96\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.641358 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-trusted-ca\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.641526 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqdf\" (UniqueName: \"kubernetes.io/projected/1e6408b2-d73d-4fc3-86f7-d29ea59ad32e-kube-api-access-rwqdf\") pod \"ingress-canary-hn5wx\" (UID: \"1e6408b2-d73d-4fc3-86f7-d29ea59ad32e\") " pod="openshift-ingress-canary/ingress-canary-hn5wx" Jan 31 09:03:25 crc kubenswrapper[4732]: 
I0131 09:03:25.643368 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eb80e0fc-6378-4d0d-a8b0-6c662073ed4d-metrics-tls\") pod \"dns-default-f78bs\" (UID: \"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d\") " pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.643409 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-registration-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.643443 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c0ada8b-e2dc-418c-a43e-33789285388f-proxy-tls\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.643926 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/075f442e-a691-4856-a6ea-e21f1dcbcb20-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zcd2h\" (UID: \"075f442e-a691-4856-a6ea-e21f1dcbcb20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.643958 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh5t6\" (UniqueName: \"kubernetes.io/projected/89cec875-cd1f-4867-8b4b-ca72c57c974b-kube-api-access-dh5t6\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.643983 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ljds4\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.644052 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2325b276-ee4d-438d-b9d6-d7de3024ba96-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kgqfp\" (UID: \"2325b276-ee4d-438d-b9d6-d7de3024ba96\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.644161 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn2dm\" (UniqueName: \"kubernetes.io/projected/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-kube-api-access-hn2dm\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.644340 4732 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txg4c\" (UniqueName: \"kubernetes.io/projected/0558933a-c8d6-45dc-aeaf-af86190b15a0-kube-api-access-txg4c\") pod \"collect-profiles-29497500-p2gn7\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.646172 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ac602fa-14af-4ae0-a538-d73e938db036-installation-pull-secrets\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.646235 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-registry-tls\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.647366 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/075f442e-a691-4856-a6ea-e21f1dcbcb20-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zcd2h\" (UID: \"075f442e-a691-4856-a6ea-e21f1dcbcb20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.681367 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-bound-sa-token\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.714974 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb2p9\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-kube-api-access-bb2p9\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.721362 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnpjp\" (UniqueName: \"kubernetes.io/projected/075f442e-a691-4856-a6ea-e21f1dcbcb20-kube-api-access-hnpjp\") pod \"kube-storage-version-migrator-operator-b67b599dd-zcd2h\" (UID: \"075f442e-a691-4856-a6ea-e21f1dcbcb20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.730843 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv"] Jan 31 09:03:25 crc kubenswrapper[4732]: W0131 09:03:25.742946 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcba3ef6a_8439_4317_bae9_01618d78512a.slice/crio-47909dae9b68f113cd9b76b94a15367d6632b514c137e8042997f03d94d0f645 WatchSource:0}: Error finding container 
47909dae9b68f113cd9b76b94a15367d6632b514c137e8042997f03d94d0f645: Status 404 returned error can't find the container with id 47909dae9b68f113cd9b76b94a15367d6632b514c137e8042997f03d94d0f645 Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.748247 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:25 crc kubenswrapper[4732]: E0131 09:03:25.748451 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.248412816 +0000 UTC m=+144.554289020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749289 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-webhook-cert\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749351 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/058f5386-f340-4a52-bfc8-9b1a60515c9b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7fqsl\" (UID: \"058f5386-f340-4a52-bfc8-9b1a60515c9b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749407 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749457 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8814e7c8-5104-40f7-9761-4feedc15697b-metrics-certs\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749524 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llg68\" (UniqueName: \"kubernetes.io/projected/1d02584d-db7d-4bc0-8cd9-33081993309b-kube-api-access-llg68\") pod \"catalog-operator-68c6474976-wb8wq\" (UID: \"1d02584d-db7d-4bc0-8cd9-33081993309b\") " 
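Every CSI failure in this window names the same root cause: kubevirt.io.hostpath-provisioner is not yet in kubelet's registry of CSI plugins, because the csi-hostpathplugin-c7j22 pod that provides it is itself still being mounted and started. Drivers registered with a node's kubelet are reflected in that node's CSINode object; a hedged client-go sketch to inspect it follows (the kubeconfig path and the node name "crc" are assumptions taken from a typical CRC setup and the hostname in the log):

```go
// csinode_check.go - lists the CSI drivers registered with kubelet on node
// "crc" by reading its CSINode object.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Until kubevirt.io.hostpath-provisioner appears in this list, MountDevice
	// for its volumes fails exactly as in the log above.
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered driver:", d.Name)
	}
}
```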
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749572 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzgzf\" (UniqueName: \"kubernetes.io/projected/eb80e0fc-6378-4d0d-a8b0-6c662073ed4d-kube-api-access-wzgzf\") pod \"dns-default-f78bs\" (UID: \"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d\") " pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749605 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-csi-data-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749646 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b99fdc-6d61-46e1-b093-b1b92efce54c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xgh7\" (UID: \"12b99fdc-6d61-46e1-b093-b1b92efce54c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749691 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4qmgq\" (UID: \"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749712 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb72h\" (UniqueName: \"kubernetes.io/projected/33169e52-3fee-462c-b341-46563ddbf5aa-kube-api-access-rb72h\") pod \"service-ca-operator-777779d784-bzk95\" (UID: \"33169e52-3fee-462c-b341-46563ddbf5aa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749753 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knh8t\" (UniqueName: \"kubernetes.io/projected/8814e7c8-5104-40f7-9761-4feedc15697b-kube-api-access-knh8t\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749777 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/caaa7607-b47d-43ca-adff-f9135baf7262-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wq7bg\" (UID: \"caaa7607-b47d-43ca-adff-f9135baf7262\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749824 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvhl\" (UniqueName: \"kubernetes.io/projected/d64c27a7-b418-450e-9067-dde0cd145597-kube-api-access-xtvhl\") pod \"machine-config-server-dmhxf\" (UID: \"d64c27a7-b418-450e-9067-dde0cd145597\") " pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749850 
4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d02584d-db7d-4bc0-8cd9-33081993309b-srv-cert\") pod \"catalog-operator-68c6474976-wb8wq\" (UID: \"1d02584d-db7d-4bc0-8cd9-33081993309b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749912 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4149606a-3fbc-4da9-ba05-dc473b492a89-signing-cabundle\") pod \"service-ca-9c57cc56f-wqf9f\" (UID: \"4149606a-3fbc-4da9-ba05-dc473b492a89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749945 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7c0ada8b-e2dc-418c-a43e-33789285388f-images\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.749983 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8chpb\" (UniqueName: \"kubernetes.io/projected/7c0ada8b-e2dc-418c-a43e-33789285388f-kube-api-access-8chpb\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750024 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/058f5386-f340-4a52-bfc8-9b1a60515c9b-srv-cert\") pod \"olm-operator-6b444d44fb-7fqsl\" (UID: \"058f5386-f340-4a52-bfc8-9b1a60515c9b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750061 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm54m\" (UniqueName: \"kubernetes.io/projected/058f5386-f340-4a52-bfc8-9b1a60515c9b-kube-api-access-dm54m\") pod \"olm-operator-6b444d44fb-7fqsl\" (UID: \"058f5386-f340-4a52-bfc8-9b1a60515c9b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750103 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/498d64fc-0d0f-43c6-aaae-bd3c5f0d7873-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-v69mc\" (UID: \"498d64fc-0d0f-43c6-aaae-bd3c5f0d7873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750126 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3302e69-0f73-4974-a8ac-af1992933147-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-llpv8\" (UID: \"a3302e69-0f73-4974-a8ac-af1992933147\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750150 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffad13f5-fb20-46ac-b886-c7e5a29b6599-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bpncg\" (UID: \"ffad13f5-fb20-46ac-b886-c7e5a29b6599\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750175 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0558933a-c8d6-45dc-aeaf-af86190b15a0-secret-volume\") pod \"collect-profiles-29497500-p2gn7\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750193 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb80e0fc-6378-4d0d-a8b0-6c662073ed4d-config-volume\") pod \"dns-default-f78bs\" (UID: \"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d\") " pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750231 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-mountpoint-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750286 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8814e7c8-5104-40f7-9761-4feedc15697b-service-ca-bundle\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750314 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d64c27a7-b418-450e-9067-dde0cd145597-certs\") pod \"machine-config-server-dmhxf\" (UID: \"d64c27a7-b418-450e-9067-dde0cd145597\") " pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750414 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d64c27a7-b418-450e-9067-dde0cd145597-node-bootstrap-token\") pod \"machine-config-server-dmhxf\" (UID: \"d64c27a7-b418-450e-9067-dde0cd145597\") " pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750634 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d02584d-db7d-4bc0-8cd9-33081993309b-profile-collector-cert\") pod \"catalog-operator-68c6474976-wb8wq\" (UID: \"1d02584d-db7d-4bc0-8cd9-33081993309b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750690 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-socket-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 
09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750717 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4qmgq\" (UID: \"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750781 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ljds4\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750834 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8814e7c8-5104-40f7-9761-4feedc15697b-default-certificate\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.750964 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h7wn\" (UniqueName: \"kubernetes.io/projected/2325b276-ee4d-438d-b9d6-d7de3024ba96-kube-api-access-6h7wn\") pod \"multus-admission-controller-857f4d67dd-kgqfp\" (UID: \"2325b276-ee4d-438d-b9d6-d7de3024ba96\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751106 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-csi-data-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751048 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwqdf\" (UniqueName: \"kubernetes.io/projected/1e6408b2-d73d-4fc3-86f7-d29ea59ad32e-kube-api-access-rwqdf\") pod \"ingress-canary-hn5wx\" (UID: \"1e6408b2-d73d-4fc3-86f7-d29ea59ad32e\") " pod="openshift-ingress-canary/ingress-canary-hn5wx" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751197 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eb80e0fc-6378-4d0d-a8b0-6c662073ed4d-metrics-tls\") pod \"dns-default-f78bs\" (UID: \"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d\") " pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751280 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-registration-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751304 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c0ada8b-e2dc-418c-a43e-33789285388f-proxy-tls\") pod \"machine-config-operator-74547568cd-4sk2n\" 
(UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: E0131 09:03:25.751336 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.251319234 +0000 UTC m=+144.557195438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751379 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh5t6\" (UniqueName: \"kubernetes.io/projected/89cec875-cd1f-4867-8b4b-ca72c57c974b-kube-api-access-dh5t6\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751418 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ljds4\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751469 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2325b276-ee4d-438d-b9d6-d7de3024ba96-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kgqfp\" (UID: \"2325b276-ee4d-438d-b9d6-d7de3024ba96\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751492 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn2dm\" (UniqueName: \"kubernetes.io/projected/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-kube-api-access-hn2dm\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751517 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txg4c\" (UniqueName: \"kubernetes.io/projected/0558933a-c8d6-45dc-aeaf-af86190b15a0-kube-api-access-txg4c\") pod \"collect-profiles-29497500-p2gn7\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751547 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c0ada8b-e2dc-418c-a43e-33789285388f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc 
kubenswrapper[4732]: I0131 09:03:25.751614 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-apiservice-cert\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751643 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j96n4\" (UniqueName: \"kubernetes.io/projected/498d64fc-0d0f-43c6-aaae-bd3c5f0d7873-kube-api-access-j96n4\") pod \"control-plane-machine-set-operator-78cbb6b69f-v69mc\" (UID: \"498d64fc-0d0f-43c6-aaae-bd3c5f0d7873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751677 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-tmpfs\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751781 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-plugins-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751805 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b99fdc-6d61-46e1-b093-b1b92efce54c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xgh7\" (UID: \"12b99fdc-6d61-46e1-b093-b1b92efce54c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751829 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s48bl\" (UniqueName: \"kubernetes.io/projected/56a9a21e-28d2-4386-9fe0-947c3a39ab6a-kube-api-access-s48bl\") pod \"migrator-59844c95c7-cx8cr\" (UID: \"56a9a21e-28d2-4386-9fe0-947c3a39ab6a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.751869 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33169e52-3fee-462c-b341-46563ddbf5aa-serving-cert\") pod \"service-ca-operator-777779d784-bzk95\" (UID: \"33169e52-3fee-462c-b341-46563ddbf5aa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752184 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8814e7c8-5104-40f7-9761-4feedc15697b-stats-auth\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752287 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ffad13f5-fb20-46ac-b886-c7e5a29b6599-config\") pod \"kube-controller-manager-operator-78b949d7b-bpncg\" (UID: \"ffad13f5-fb20-46ac-b886-c7e5a29b6599\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752314 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hprwd\" (UniqueName: \"kubernetes.io/projected/caaa7607-b47d-43ca-adff-f9135baf7262-kube-api-access-hprwd\") pod \"package-server-manager-789f6589d5-wq7bg\" (UID: \"caaa7607-b47d-43ca-adff-f9135baf7262\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752344 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w46qw\" (UniqueName: \"kubernetes.io/projected/a3302e69-0f73-4974-a8ac-af1992933147-kube-api-access-w46qw\") pod \"machine-config-controller-84d6567774-llpv8\" (UID: \"a3302e69-0f73-4974-a8ac-af1992933147\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752495 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwh5d\" (UniqueName: \"kubernetes.io/projected/12b99fdc-6d61-46e1-b093-b1b92efce54c-kube-api-access-xwh5d\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xgh7\" (UID: \"12b99fdc-6d61-46e1-b093-b1b92efce54c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752526 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54-config\") pod \"kube-apiserver-operator-766d6c64bb-4qmgq\" (UID: \"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752592 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jzrl\" (UniqueName: \"kubernetes.io/projected/4149606a-3fbc-4da9-ba05-dc473b492a89-kube-api-access-2jzrl\") pod \"service-ca-9c57cc56f-wqf9f\" (UID: \"4149606a-3fbc-4da9-ba05-dc473b492a89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752623 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3302e69-0f73-4974-a8ac-af1992933147-proxy-tls\") pod \"machine-config-controller-84d6567774-llpv8\" (UID: \"a3302e69-0f73-4974-a8ac-af1992933147\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752650 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffad13f5-fb20-46ac-b886-c7e5a29b6599-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bpncg\" (UID: \"ffad13f5-fb20-46ac-b886-c7e5a29b6599\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752692 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67nvg\" 
(UniqueName: \"kubernetes.io/projected/f4d0ed50-aa9b-4a62-b340-882ddf73f008-kube-api-access-67nvg\") pod \"marketplace-operator-79b997595-ljds4\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752732 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4149606a-3fbc-4da9-ba05-dc473b492a89-signing-key\") pod \"service-ca-9c57cc56f-wqf9f\" (UID: \"4149606a-3fbc-4da9-ba05-dc473b492a89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752916 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33169e52-3fee-462c-b341-46563ddbf5aa-config\") pod \"service-ca-operator-777779d784-bzk95\" (UID: \"33169e52-3fee-462c-b341-46563ddbf5aa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.752969 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e6408b2-d73d-4fc3-86f7-d29ea59ad32e-cert\") pod \"ingress-canary-hn5wx\" (UID: \"1e6408b2-d73d-4fc3-86f7-d29ea59ad32e\") " pod="openshift-ingress-canary/ingress-canary-hn5wx" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.753030 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0558933a-c8d6-45dc-aeaf-af86190b15a0-config-volume\") pod \"collect-profiles-29497500-p2gn7\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.753557 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-socket-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.753919 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-mountpoint-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.754193 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-registration-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.754253 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b99fdc-6d61-46e1-b093-b1b92efce54c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xgh7\" (UID: \"12b99fdc-6d61-46e1-b093-b1b92efce54c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.754675 4732 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb80e0fc-6378-4d0d-a8b0-6c662073ed4d-config-volume\") pod \"dns-default-f78bs\" (UID: \"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d\") " pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.754751 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ljds4\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.754920 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3302e69-0f73-4974-a8ac-af1992933147-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-llpv8\" (UID: \"a3302e69-0f73-4974-a8ac-af1992933147\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.755041 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffad13f5-fb20-46ac-b886-c7e5a29b6599-config\") pod \"kube-controller-manager-operator-78b949d7b-bpncg\" (UID: \"ffad13f5-fb20-46ac-b886-c7e5a29b6599\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.756191 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/058f5386-f340-4a52-bfc8-9b1a60515c9b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7fqsl\" (UID: \"058f5386-f340-4a52-bfc8-9b1a60515c9b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.756341 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/89cec875-cd1f-4867-8b4b-ca72c57c974b-plugins-dir\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.757910 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/caaa7607-b47d-43ca-adff-f9135baf7262-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wq7bg\" (UID: \"caaa7607-b47d-43ca-adff-f9135baf7262\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.757970 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8814e7c8-5104-40f7-9761-4feedc15697b-metrics-certs\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.758298 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eb80e0fc-6378-4d0d-a8b0-6c662073ed4d-metrics-tls\") pod \"dns-default-f78bs\" (UID: \"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d\") " 
pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.758354 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d64c27a7-b418-450e-9067-dde0cd145597-node-bootstrap-token\") pod \"machine-config-server-dmhxf\" (UID: \"d64c27a7-b418-450e-9067-dde0cd145597\") " pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.759437 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3302e69-0f73-4974-a8ac-af1992933147-proxy-tls\") pod \"machine-config-controller-84d6567774-llpv8\" (UID: \"a3302e69-0f73-4974-a8ac-af1992933147\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.760250 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-webhook-cert\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.760276 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ljds4\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.761052 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8814e7c8-5104-40f7-9761-4feedc15697b-default-certificate\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.761790 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c0ada8b-e2dc-418c-a43e-33789285388f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.761814 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33169e52-3fee-462c-b341-46563ddbf5aa-config\") pod \"service-ca-operator-777779d784-bzk95\" (UID: \"33169e52-3fee-462c-b341-46563ddbf5aa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.761874 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4qmgq\" (UID: \"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.762026 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/058f5386-f340-4a52-bfc8-9b1a60515c9b-srv-cert\") pod \"olm-operator-6b444d44fb-7fqsl\" (UID: \"058f5386-f340-4a52-bfc8-9b1a60515c9b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.762613 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8814e7c8-5104-40f7-9761-4feedc15697b-service-ca-bundle\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.762710 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8814e7c8-5104-40f7-9761-4feedc15697b-stats-auth\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.762940 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0558933a-c8d6-45dc-aeaf-af86190b15a0-secret-volume\") pod \"collect-profiles-29497500-p2gn7\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.762939 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/498d64fc-0d0f-43c6-aaae-bd3c5f0d7873-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-v69mc\" (UID: \"498d64fc-0d0f-43c6-aaae-bd3c5f0d7873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.763014 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0558933a-c8d6-45dc-aeaf-af86190b15a0-config-volume\") pod \"collect-profiles-29497500-p2gn7\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.763114 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d02584d-db7d-4bc0-8cd9-33081993309b-profile-collector-cert\") pod \"catalog-operator-68c6474976-wb8wq\" (UID: \"1d02584d-db7d-4bc0-8cd9-33081993309b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.763136 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54-config\") pod \"kube-apiserver-operator-766d6c64bb-4qmgq\" (UID: \"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.763456 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-tmpfs\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.763561 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d64c27a7-b418-450e-9067-dde0cd145597-certs\") pod \"machine-config-server-dmhxf\" (UID: \"d64c27a7-b418-450e-9067-dde0cd145597\") " pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.763881 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffad13f5-fb20-46ac-b886-c7e5a29b6599-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bpncg\" (UID: \"ffad13f5-fb20-46ac-b886-c7e5a29b6599\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.764119 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e6408b2-d73d-4fc3-86f7-d29ea59ad32e-cert\") pod \"ingress-canary-hn5wx\" (UID: \"1e6408b2-d73d-4fc3-86f7-d29ea59ad32e\") " pod="openshift-ingress-canary/ingress-canary-hn5wx" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.764171 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d02584d-db7d-4bc0-8cd9-33081993309b-srv-cert\") pod \"catalog-operator-68c6474976-wb8wq\" (UID: \"1d02584d-db7d-4bc0-8cd9-33081993309b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.764625 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7c0ada8b-e2dc-418c-a43e-33789285388f-images\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.764766 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4149606a-3fbc-4da9-ba05-dc473b492a89-signing-cabundle\") pod \"service-ca-9c57cc56f-wqf9f\" (UID: \"4149606a-3fbc-4da9-ba05-dc473b492a89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.765150 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-apiservice-cert\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.765285 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2325b276-ee4d-438d-b9d6-d7de3024ba96-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kgqfp\" (UID: \"2325b276-ee4d-438d-b9d6-d7de3024ba96\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.765631 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b99fdc-6d61-46e1-b093-b1b92efce54c-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-8xgh7\" (UID: \"12b99fdc-6d61-46e1-b093-b1b92efce54c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.765729 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33169e52-3fee-462c-b341-46563ddbf5aa-serving-cert\") pod \"service-ca-operator-777779d784-bzk95\" (UID: \"33169e52-3fee-462c-b341-46563ddbf5aa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.766615 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4149606a-3fbc-4da9-ba05-dc473b492a89-signing-key\") pod \"service-ca-9c57cc56f-wqf9f\" (UID: \"4149606a-3fbc-4da9-ba05-dc473b492a89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.768399 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c0ada8b-e2dc-418c-a43e-33789285388f-proxy-tls\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.795366 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb72h\" (UniqueName: \"kubernetes.io/projected/33169e52-3fee-462c-b341-46563ddbf5aa-kube-api-access-rb72h\") pod \"service-ca-operator-777779d784-bzk95\" (UID: \"33169e52-3fee-462c-b341-46563ddbf5aa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.816764 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llg68\" (UniqueName: \"kubernetes.io/projected/1d02584d-db7d-4bc0-8cd9-33081993309b-kube-api-access-llg68\") pod \"catalog-operator-68c6474976-wb8wq\" (UID: \"1d02584d-db7d-4bc0-8cd9-33081993309b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.838697 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzgzf\" (UniqueName: \"kubernetes.io/projected/eb80e0fc-6378-4d0d-a8b0-6c662073ed4d-kube-api-access-wzgzf\") pod \"dns-default-f78bs\" (UID: \"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d\") " pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.844432 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8t8ks"] Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.852005 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.853840 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:25 crc kubenswrapper[4732]: E0131 09:03:25.854183 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.353969758 +0000 UTC m=+144.659845962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.854560 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:25 crc kubenswrapper[4732]: E0131 09:03:25.854935 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.35491771 +0000 UTC m=+144.660793904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.860965 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh5t6\" (UniqueName: \"kubernetes.io/projected/89cec875-cd1f-4867-8b4b-ca72c57c974b-kube-api-access-dh5t6\") pod \"csi-hostpathplugin-c7j22\" (UID: \"89cec875-cd1f-4867-8b4b-ca72c57c974b\") " pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.863744 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c8t6l"] Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.874300 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vpnm5"] Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.876315 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fsss9"] Jan 31 09:03:25 crc kubenswrapper[4732]: W0131 09:03:25.886982 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cc29c02_baeb_4f46_92d6_684343509ae1.slice/crio-028c79adb4156659acd73bab76af06e2c4665808280d16be70d725ea649b82b3 WatchSource:0}: Error finding container 028c79adb4156659acd73bab76af06e2c4665808280d16be70d725ea649b82b3: Status 404 returned error can't find the container with id 028c79adb4156659acd73bab76af06e2c4665808280d16be70d725ea649b82b3 Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.889096 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm54m\" (UniqueName: \"kubernetes.io/projected/058f5386-f340-4a52-bfc8-9b1a60515c9b-kube-api-access-dm54m\") pod \"olm-operator-6b444d44fb-7fqsl\" (UID: \"058f5386-f340-4a52-bfc8-9b1a60515c9b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.896274 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knh8t\" (UniqueName: \"kubernetes.io/projected/8814e7c8-5104-40f7-9761-4feedc15697b-kube-api-access-knh8t\") pod \"router-default-5444994796-h5q9f\" (UID: \"8814e7c8-5104-40f7-9761-4feedc15697b\") " pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.927305 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h7wn\" (UniqueName: \"kubernetes.io/projected/2325b276-ee4d-438d-b9d6-d7de3024ba96-kube-api-access-6h7wn\") pod \"multus-admission-controller-857f4d67dd-kgqfp\" (UID: \"2325b276-ee4d-438d-b9d6-d7de3024ba96\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.947687 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz"] Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.951274 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4qmgq\" (UID: \"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.956618 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:25 crc kubenswrapper[4732]: E0131 09:03:25.957053 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.457024786 +0000 UTC m=+144.762900990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.963166 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffad13f5-fb20-46ac-b886-c7e5a29b6599-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bpncg\" (UID: \"ffad13f5-fb20-46ac-b886-c7e5a29b6599\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.982145 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.985792 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jzrl\" (UniqueName: \"kubernetes.io/projected/4149606a-3fbc-4da9-ba05-dc473b492a89-kube-api-access-2jzrl\") pod \"service-ca-9c57cc56f-wqf9f\" (UID: \"4149606a-3fbc-4da9-ba05-dc473b492a89\") " pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.996399 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:25 crc kubenswrapper[4732]: I0131 09:03:25.998415 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67nvg\" (UniqueName: \"kubernetes.io/projected/f4d0ed50-aa9b-4a62-b340-882ddf73f008-kube-api-access-67nvg\") pod \"marketplace-operator-79b997595-ljds4\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.011155 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.020499 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvhl\" (UniqueName: \"kubernetes.io/projected/d64c27a7-b418-450e-9067-dde0cd145597-kube-api-access-xtvhl\") pod \"machine-config-server-dmhxf\" (UID: \"d64c27a7-b418-450e-9067-dde0cd145597\") " pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:26 crc kubenswrapper[4732]: W0131 09:03:26.029502 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8814e7c8_5104_40f7_9761_4feedc15697b.slice/crio-810a7219fbe0602d94b66784880ef250eec1b53ec5dbf8b9e30ef55ffb36b889 WatchSource:0}: Error finding container 810a7219fbe0602d94b66784880ef250eec1b53ec5dbf8b9e30ef55ffb36b889: Status 404 returned error can't find the container with id 810a7219fbe0602d94b66784880ef250eec1b53ec5dbf8b9e30ef55ffb36b889 Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.035017 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.046568 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hprwd\" (UniqueName: \"kubernetes.io/projected/caaa7607-b47d-43ca-adff-f9135baf7262-kube-api-access-hprwd\") pod \"package-server-manager-789f6589d5-wq7bg\" (UID: \"caaa7607-b47d-43ca-adff-f9135baf7262\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.058548 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.060102 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.060699 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.560653933 +0000 UTC m=+144.866530137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.064351 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.067109 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w46qw\" (UniqueName: \"kubernetes.io/projected/a3302e69-0f73-4974-a8ac-af1992933147-kube-api-access-w46qw\") pod \"machine-config-controller-84d6567774-llpv8\" (UID: \"a3302e69-0f73-4974-a8ac-af1992933147\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.069731 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.080397 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.081255 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwh5d\" (UniqueName: \"kubernetes.io/projected/12b99fdc-6d61-46e1-b093-b1b92efce54c-kube-api-access-xwh5d\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xgh7\" (UID: \"12b99fdc-6d61-46e1-b093-b1b92efce54c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.082832 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-f78bs"] Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.085062 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.095482 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.105657 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j96n4\" (UniqueName: \"kubernetes.io/projected/498d64fc-0d0f-43c6-aaae-bd3c5f0d7873-kube-api-access-j96n4\") pod \"control-plane-machine-set-operator-78cbb6b69f-v69mc\" (UID: \"498d64fc-0d0f-43c6-aaae-bd3c5f0d7873\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.115191 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.121931 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwqdf\" (UniqueName: \"kubernetes.io/projected/1e6408b2-d73d-4fc3-86f7-d29ea59ad32e-kube-api-access-rwqdf\") pod \"ingress-canary-hn5wx\" (UID: \"1e6408b2-d73d-4fc3-86f7-d29ea59ad32e\") " pod="openshift-ingress-canary/ingress-canary-hn5wx" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.140356 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-c7j22" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.157344 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn2dm\" (UniqueName: \"kubernetes.io/projected/6a680afa-dc56-4bf8-808c-b1c947c8fbf0-kube-api-access-hn2dm\") pod \"packageserver-d55dfcdfc-6fwnh\" (UID: \"6a680afa-dc56-4bf8-808c-b1c947c8fbf0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.159808 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dmhxf" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.160721 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txg4c\" (UniqueName: \"kubernetes.io/projected/0558933a-c8d6-45dc-aeaf-af86190b15a0-kube-api-access-txg4c\") pod \"collect-profiles-29497500-p2gn7\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.161081 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.161358 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.661329581 +0000 UTC m=+144.967205785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.161893 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.162554 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.662539742 +0000 UTC m=+144.968415946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.178708 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s48bl\" (UniqueName: \"kubernetes.io/projected/56a9a21e-28d2-4386-9fe0-947c3a39ab6a-kube-api-access-s48bl\") pod \"migrator-59844c95c7-cx8cr\" (UID: \"56a9a21e-28d2-4386-9fe0-947c3a39ab6a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.196906 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.217429 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8chpb\" (UniqueName: \"kubernetes.io/projected/7c0ada8b-e2dc-418c-a43e-33789285388f-kube-api-access-8chpb\") pod \"machine-config-operator-74547568cd-4sk2n\" (UID: \"7c0ada8b-e2dc-418c-a43e-33789285388f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.265396 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.265823 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.765801097 +0000 UTC m=+145.071677301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.303423 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.305519 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h"] Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.318310 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.326203 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.344863 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.350537 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.369806 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.371632 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl"] Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.380076 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" event={"ID":"cba3ef6a-8439-4317-bae9-01618d78512a","Type":"ContainerStarted","Data":"9c7588f616c64cc2c59120e413176a76c8f1894b15043d1fcba7f8966cddaa81"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.380147 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" event={"ID":"cba3ef6a-8439-4317-bae9-01618d78512a","Type":"ContainerStarted","Data":"47909dae9b68f113cd9b76b94a15367d6632b514c137e8042997f03d94d0f645"} Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.380904 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.88088602 +0000 UTC m=+145.186762224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.400516 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.408370 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hn5wx" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.428948 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8t8ks" event={"ID":"b35d0df8-53f0-4787-b0b4-c93be28f0127","Type":"ContainerStarted","Data":"4a39f64f86fe6cb1367f5507d76711de9f59d5b40811435a28504bbb19c0815e"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.428999 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8t8ks" event={"ID":"b35d0df8-53f0-4787-b0b4-c93be28f0127","Type":"ContainerStarted","Data":"cc5cb0ee220b6ce5469a8d59a01d7b091efed469d2881653bd6be64789fc28e5"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.468872 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" event={"ID":"16c9233d-0b27-4994-bc3d-62d4ec86a4ec","Type":"ContainerStarted","Data":"2cfe53f46628fdc8be7a5a78124aacf0f46a3c4f4093665f812038778db568cc"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.470572 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.471517 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:26.971493588 +0000 UTC m=+145.277369792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.490124 4732 generic.go:334] "Generic (PLEG): container finished" podID="576b5a44-3c4c-4905-8d89-caed3b1eb43f" containerID="5eb2173f2f321ecef00d9ddbca74e917d0284c4b6d0c5c9a6a07b3f57416a7ef" exitCode=0 Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.490232 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" event={"ID":"576b5a44-3c4c-4905-8d89-caed3b1eb43f","Type":"ContainerDied","Data":"5eb2173f2f321ecef00d9ddbca74e917d0284c4b6d0c5c9a6a07b3f57416a7ef"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.496919 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" event={"ID":"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d","Type":"ContainerStarted","Data":"2c61b8b5174b73d10a499a6df7d2119e9ef1c1363920d5bf989b781c1adeee7f"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.504053 4732 generic.go:334] "Generic (PLEG): container finished" podID="81b523ca-b564-45d4-bad5-f7e236f2e6d0" containerID="859dd7d18198cae3e59b1016af2e0383557984484498ef61411a009cd59679f4" exitCode=0 Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.504173 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xprfh" event={"ID":"81b523ca-b564-45d4-bad5-f7e236f2e6d0","Type":"ContainerDied","Data":"859dd7d18198cae3e59b1016af2e0383557984484498ef61411a009cd59679f4"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.515547 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bzk95"] Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.522919 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" event={"ID":"62bbd0d6-eba8-4737-9608-3f7a3dd6a157","Type":"ContainerStarted","Data":"874ed60b00db00813c1358ba0365c34806d02993922ccb884b227b0615d530da"} Jan 31 09:03:26 crc kubenswrapper[4732]: W0131 09:03:26.534020 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod058f5386_f340_4a52_bfc8_9b1a60515c9b.slice/crio-6fd5aabbe2ed81a7a78020aa8ca21834975dae45d93b7e0645ab08ab12f1815f WatchSource:0}: Error finding container 6fd5aabbe2ed81a7a78020aa8ca21834975dae45d93b7e0645ab08ab12f1815f: Status 404 returned error can't find the container with id 6fd5aabbe2ed81a7a78020aa8ca21834975dae45d93b7e0645ab08ab12f1815f Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.534875 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ncqs4" podStartSLOduration=119.534852131 podStartE2EDuration="1m59.534852131s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:26.534406446 +0000 UTC 
m=+144.840282660" watchObservedRunningTime="2026-01-31 09:03:26.534852131 +0000 UTC m=+144.840728335" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.576604 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.577521 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:27.077503427 +0000 UTC m=+145.383379631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.590280 4732 generic.go:334] "Generic (PLEG): container finished" podID="6377a401-b10b-455a-8906-f6706302b91f" containerID="75d4dfe8017617c8bc39777fe7f724ffac557d97a7569ca193cf719ee8c8ddf5" exitCode=0 Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.595468 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" event={"ID":"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20","Type":"ContainerStarted","Data":"bf0aacb740607afdcd33e43432dcaec43c8aa3d7707aec7cab5cbf845309020a"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.595535 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" event={"ID":"6377a401-b10b-455a-8906-f6706302b91f","Type":"ContainerDied","Data":"75d4dfe8017617c8bc39777fe7f724ffac557d97a7569ca193cf719ee8c8ddf5"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.595557 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kgqfp"] Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.639012 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dv6hv" podStartSLOduration=119.638991616 podStartE2EDuration="1m59.638991616s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:26.638151477 +0000 UTC m=+144.944027691" watchObservedRunningTime="2026-01-31 09:03:26.638991616 +0000 UTC m=+144.944867820" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.680216 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" event={"ID":"edb14eaf-7738-4139-9b1b-9557e7e37ffc","Type":"ContainerStarted","Data":"e184ef399f504c65a18a1317d6c06296c3275341759143fd1ecc5c11e6dab7ea"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.680289 4732 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" event={"ID":"edb14eaf-7738-4139-9b1b-9557e7e37ffc","Type":"ContainerStarted","Data":"6b2deccfb5f751a56925281b353e90624d7859e638863238b0271bd0f7231b87"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.694748 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.694967 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:27.194926178 +0000 UTC m=+145.500802392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.702086 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.706399 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-f78bs" event={"ID":"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d","Type":"ContainerStarted","Data":"ecab75dc9cd7bd70c1ab9df8ab84ea76f66b4003ed17c9e90bec058923db5a0b"} Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.707998 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:27.207976327 +0000 UTC m=+145.513852591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.711019 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" event={"ID":"541ea3c2-891c-4c3e-81fd-9d340112c62b","Type":"ContainerStarted","Data":"e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.711987 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.755070 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" event={"ID":"8cc29c02-baeb-4f46-92d6-684343509ae1","Type":"ContainerStarted","Data":"028c79adb4156659acd73bab76af06e2c4665808280d16be70d725ea649b82b3"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.758118 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" event={"ID":"639dacb9-2ea3-49d2-b5c4-996992c8e16a","Type":"ContainerStarted","Data":"fb874cab7660d02215feda5562a2f58f952b7ed4ab3d6f1a75632fb183acbe30"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.774165 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" event={"ID":"e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a","Type":"ContainerStarted","Data":"6eca15ab7357874e4edb05c74f358f8f33b6cebbec102a9396211388e37d5da1"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.782780 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h5q9f" event={"ID":"8814e7c8-5104-40f7-9761-4feedc15697b","Type":"ContainerStarted","Data":"810a7219fbe0602d94b66784880ef250eec1b53ec5dbf8b9e30ef55ffb36b889"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.787704 4732 patch_prober.go:28] interesting pod/downloads-7954f5f757-rt2jr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.788036 4732 patch_prober.go:28] interesting pod/console-operator-58897d9998-76d6v container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.788079 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-76d6v" podUID="499830ff-8add-4caf-b469-d1cbde569fb7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.788166 4732 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-rt2jr" podUID="81e1781e-a935-4f3f-b2aa-9a0807f43c73" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.787702 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" event={"ID":"219a04b6-e7bd-4138-bcc7-4f650537aa24","Type":"ContainerStarted","Data":"45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481"} Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.788216 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.792645 4732 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tg4xc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.792704 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" podUID="219a04b6-e7bd-4138-bcc7-4f650537aa24" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.810199 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.811790 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:27.31176417 +0000 UTC m=+145.617640374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.898766 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-rt2jr" podStartSLOduration=119.898742207 podStartE2EDuration="1m59.898742207s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:26.898681465 +0000 UTC m=+145.204557669" watchObservedRunningTime="2026-01-31 09:03:26.898742207 +0000 UTC m=+145.204618411" Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.912643 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:26 crc kubenswrapper[4732]: E0131 09:03:26.916388 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:27.41636648 +0000 UTC m=+145.722242734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:26 crc kubenswrapper[4732]: I0131 09:03:26.936732 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq"] Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.001879 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.003828 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.003884 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.015453 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.016223 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:27.516199429 +0000 UTC m=+145.822075623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.123747 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.124205 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 09:03:27.624191643 +0000 UTC m=+145.930067847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.125235 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ljds4"] Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.128765 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wqf9f"] Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.165111 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-76d6v" podStartSLOduration=120.16508708 podStartE2EDuration="2m0.16508708s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:27.164962386 +0000 UTC m=+145.470838600" watchObservedRunningTime="2026-01-31 09:03:27.16508708 +0000 UTC m=+145.470963284" Jan 31 09:03:27 crc kubenswrapper[4732]: W0131 09:03:27.178406 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d02584d_db7d_4bc0_8cd9_33081993309b.slice/crio-26a7947af87c367b8b5fd8967811355f2031ef1361fc3f3dc4ec465f3ec37d25 WatchSource:0}: Error finding container 26a7947af87c367b8b5fd8967811355f2031ef1361fc3f3dc4ec465f3ec37d25: Status 404 returned error can't find the container with id 26a7947af87c367b8b5fd8967811355f2031ef1361fc3f3dc4ec465f3ec37d25 Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.187720 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.213355 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r9cz8" podStartSLOduration=120.213334353 podStartE2EDuration="2m0.213334353s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:27.213205849 +0000 UTC m=+145.519082043" watchObservedRunningTime="2026-01-31 09:03:27.213334353 +0000 UTC m=+145.519210557" Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.228518 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.228876 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-31 09:03:27.728856195 +0000 UTC m=+146.034732399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.278992 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7"] Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.300597 4732 csr.go:261] certificate signing request csr-jpx96 is approved, waiting to be issued Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.336300 4732 csr.go:257] certificate signing request csr-jpx96 is issued Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.338331 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.338626 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:27.838612819 +0000 UTC m=+146.144489023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.438952 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.439166 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:27.939127302 +0000 UTC m=+146.245003506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.439317 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.439850 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:27.939841966 +0000 UTC m=+146.245718170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: W0131 09:03:27.481719 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4d0ed50_aa9b_4a62_b340_882ddf73f008.slice/crio-94900506e78a6c467e226df2c51bd0deeed7b70de0ce789b914b8672f3e90a32 WatchSource:0}: Error finding container 94900506e78a6c467e226df2c51bd0deeed7b70de0ce789b914b8672f3e90a32: Status 404 returned error can't find the container with id 94900506e78a6c467e226df2c51bd0deeed7b70de0ce789b914b8672f3e90a32 Jan 31 09:03:27 crc kubenswrapper[4732]: W0131 09:03:27.516748 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12b99fdc_6d61_46e1_b093_b1b92efce54c.slice/crio-d823b2f2259fe7403df9d95229c6812fa829c22821d540565c0441a8e495dd3f WatchSource:0}: Error finding container d823b2f2259fe7403df9d95229c6812fa829c22821d540565c0441a8e495dd3f: Status 404 returned error can't find the container with id d823b2f2259fe7403df9d95229c6812fa829c22821d540565c0441a8e495dd3f Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.541177 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.541623 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 09:03:28.041590649 +0000 UTC m=+146.347466853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.560217 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8t8ks" podStartSLOduration=120.560195806 podStartE2EDuration="2m0.560195806s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:27.558039693 +0000 UTC m=+145.863915907" watchObservedRunningTime="2026-01-31 09:03:27.560195806 +0000 UTC m=+145.866072010" Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.647128 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.657273 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.157250432 +0000 UTC m=+146.463126626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.758175 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc"] Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.769044 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.770335 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.269643874 +0000 UTC m=+146.575520078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.776680 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg"] Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.871416 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.871850 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.371835473 +0000 UTC m=+146.677711677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.878710 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" event={"ID":"8cc29c02-baeb-4f46-92d6-684343509ae1","Type":"ContainerStarted","Data":"994f37a34067d24060075614a4201810c77b264e2851f8972bb66f0402c0b01b"} Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.897769 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pxn6w" podStartSLOduration=120.897751345 podStartE2EDuration="2m0.897751345s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:27.896052648 +0000 UTC m=+146.201928852" watchObservedRunningTime="2026-01-31 09:03:27.897751345 +0000 UTC m=+146.203627539" Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.917722 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg"] Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.917999 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq"] Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.931524 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" 
event={"ID":"12b99fdc-6d61-46e1-b093-b1b92efce54c","Type":"ContainerStarted","Data":"d823b2f2259fe7403df9d95229c6812fa829c22821d540565c0441a8e495dd3f"} Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.958712 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" event={"ID":"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20","Type":"ContainerStarted","Data":"9615c331134f8617d35092f6d2eb0dd4c5eead219bbc1b139774acc1bdb42b9b"} Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.959223 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.978409 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.978968 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh"] Jan 31 09:03:27 crc kubenswrapper[4732]: E0131 09:03:27.980520 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.480483479 +0000 UTC m=+146.786359693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:27 crc kubenswrapper[4732]: I0131 09:03:27.989322 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c7j22"] Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.002420 4732 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c8t6l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.002489 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" podUID="c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.002637 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dmhxf" event={"ID":"d64c27a7-b418-450e-9067-dde0cd145597","Type":"ContainerStarted","Data":"1ff9dd2a96d576f89512e1b8be3651e306b1ff8b52addc0f38048e9e4c6976ed"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.013086 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:28 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:28 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:28 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.013149 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.024801 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n"] Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.040839 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-54nxd" podStartSLOduration=121.04082189 podStartE2EDuration="2m1.04082189s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.03935069 +0000 UTC m=+146.345226894" watchObservedRunningTime="2026-01-31 09:03:28.04082189 +0000 UTC m=+146.346698094" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.069513 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h5q9f" event={"ID":"8814e7c8-5104-40f7-9761-4feedc15697b","Type":"ContainerStarted","Data":"8e2e47330a0cff60d343fdebcddbd68f81d61614c46ac5de056954f059801465"} Jan 31 09:03:28 crc kubenswrapper[4732]: E0131 09:03:28.091258 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.591240176 +0000 UTC m=+146.897116380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.083525 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.113149 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" event={"ID":"1d02584d-db7d-4bc0-8cd9-33081993309b","Type":"ContainerStarted","Data":"26a7947af87c367b8b5fd8967811355f2031ef1361fc3f3dc4ec465f3ec37d25"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.131740 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" event={"ID":"f4d0ed50-aa9b-4a62-b340-882ddf73f008","Type":"ContainerStarted","Data":"94900506e78a6c467e226df2c51bd0deeed7b70de0ce789b914b8672f3e90a32"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.155724 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hn5wx"] Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.162902 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" podStartSLOduration=121.162876857 podStartE2EDuration="2m1.162876857s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.149633111 +0000 UTC m=+146.455509315" watchObservedRunningTime="2026-01-31 09:03:28.162876857 +0000 UTC m=+146.468753071" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.165339 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr"] Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.175924 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" event={"ID":"639dacb9-2ea3-49d2-b5c4-996992c8e16a","Type":"ContainerStarted","Data":"7d3480f4669b746b7e3f02ebc7e7cdd9677524e312fd46c6cd599eb58e036199"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.179305 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" event={"ID":"075f442e-a691-4856-a6ea-e21f1dcbcb20","Type":"ContainerStarted","Data":"93f93c2ee3a222337228cf7e2b94fe95283c70b8a31b8ac3338cb37ac5ae12b2"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.186095 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-f78bs" 
event={"ID":"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d","Type":"ContainerStarted","Data":"1c48a0fa383a2c663b8e2a2ce2e4a20e47f5f248bbe6f7cd0bb0f7fbbd874e52"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.194345 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4732]: E0131 09:03:28.195564 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.695534206 +0000 UTC m=+147.001410470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.203825 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" event={"ID":"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d","Type":"ContainerStarted","Data":"454505ef33a0f64d3a5a9bf57d498ac32d53fa4b6eeead1b0816b17db84943d1"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.234017 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fsss9" podStartSLOduration=121.23399667 podStartE2EDuration="2m1.23399667s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.194638686 +0000 UTC m=+146.500514890" watchObservedRunningTime="2026-01-31 09:03:28.23399667 +0000 UTC m=+146.539872874" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.239905 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7"] Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.246147 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" event={"ID":"2325b276-ee4d-438d-b9d6-d7de3024ba96","Type":"ContainerStarted","Data":"c4a0c81d27e168dda742e6b89de4c9bf7097f998145340484ac2a86979fbc824"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.265033 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" event={"ID":"058f5386-f340-4a52-bfc8-9b1a60515c9b","Type":"ContainerStarted","Data":"6fd5aabbe2ed81a7a78020aa8ca21834975dae45d93b7e0645ab08ab12f1815f"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.266101 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.267110 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vkrgj" podStartSLOduration=121.267092014 podStartE2EDuration="2m1.267092014s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.250839267 +0000 UTC m=+146.556715471" watchObservedRunningTime="2026-01-31 09:03:28.267092014 +0000 UTC m=+146.572968218" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.268504 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8"] Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.275009 4732 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7fqsl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.275051 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" podUID="058f5386-f340-4a52-bfc8-9b1a60515c9b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 31 09:03:28 crc kubenswrapper[4732]: W0131 09:03:28.283696 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0558933a_c8d6_45dc_aeaf_af86190b15a0.slice/crio-489c0c3b171c6d96e855fb1f2464c3d1486c21baf174589add9247f92fff3bcc WatchSource:0}: Error finding container 489c0c3b171c6d96e855fb1f2464c3d1486c21baf174589add9247f92fff3bcc: Status 404 returned error can't find the container with id 489c0c3b171c6d96e855fb1f2464c3d1486c21baf174589add9247f92fff3bcc Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.283963 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" event={"ID":"4149606a-3fbc-4da9-ba05-dc473b492a89","Type":"ContainerStarted","Data":"0d603fb2b88bc584de7719a4094d2d7f44ba3f412750957a8c4a44d9d9f4e109"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.294578 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" podStartSLOduration=121.294550048 podStartE2EDuration="2m1.294550048s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.291179694 +0000 UTC m=+146.597055888" watchObservedRunningTime="2026-01-31 09:03:28.294550048 +0000 UTC m=+146.600426252" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.296585 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:28 crc kubenswrapper[4732]: E0131 09:03:28.300376 4732 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.800357783 +0000 UTC m=+147.106234057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.307128 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" event={"ID":"33169e52-3fee-462c-b341-46563ddbf5aa","Type":"ContainerStarted","Data":"03c6183f62903179b889f746896931dc6c474b57a0bc50364cf6b9de2e1aa3a8"} Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.318891 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.347355 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 08:58:27 +0000 UTC, rotation deadline is 2026-11-30 21:28:02.735812286 +0000 UTC Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.347402 4732 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7284h24m34.38841404s for next certificate rotation Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.370182 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h5q9f" podStartSLOduration=121.370157272 podStartE2EDuration="2m1.370157272s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.32789324 +0000 UTC m=+146.633769444" watchObservedRunningTime="2026-01-31 09:03:28.370157272 +0000 UTC m=+146.676033486" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.400073 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4732]: E0131 09:03:28.409768 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:28.909726994 +0000 UTC m=+147.215603238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4732]: W0131 09:03:28.418372 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3302e69_0f73_4974_a8ac_af1992933147.slice/crio-f53f4c1e20390b55703a4f25af2363cf2ce342ac193b7fd1994de4cad5ba828e WatchSource:0}: Error finding container f53f4c1e20390b55703a4f25af2363cf2ce342ac193b7fd1994de4cad5ba828e: Status 404 returned error can't find the container with id f53f4c1e20390b55703a4f25af2363cf2ce342ac193b7fd1994de4cad5ba828e Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.422318 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" podStartSLOduration=121.422296007 podStartE2EDuration="2m1.422296007s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.420985483 +0000 UTC m=+146.726861697" watchObservedRunningTime="2026-01-31 09:03:28.422296007 +0000 UTC m=+146.728172211" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.422823 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sf8wd" podStartSLOduration=121.422816694 podStartE2EDuration="2m1.422816694s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.371073153 +0000 UTC m=+146.676949357" watchObservedRunningTime="2026-01-31 09:03:28.422816694 +0000 UTC m=+146.728692908" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.490483 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" podStartSLOduration=121.49045215 podStartE2EDuration="2m1.49045215s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.482442531 +0000 UTC m=+146.788318735" watchObservedRunningTime="2026-01-31 09:03:28.49045215 +0000 UTC m=+146.796328354" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.502918 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:28 crc kubenswrapper[4732]: E0131 09:03:28.514118 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 09:03:29.014093356 +0000 UTC m=+147.319969560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.544905 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" podStartSLOduration=121.544880632 podStartE2EDuration="2m1.544880632s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.517127408 +0000 UTC m=+146.823003612" watchObservedRunningTime="2026-01-31 09:03:28.544880632 +0000 UTC m=+146.850756836" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.611319 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4732]: E0131 09:03:28.611925 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.111900548 +0000 UTC m=+147.417776752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.612470 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.618051 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" podStartSLOduration=121.618025114 podStartE2EDuration="2m1.618025114s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.571479517 +0000 UTC m=+146.877355721" watchObservedRunningTime="2026-01-31 09:03:28.618025114 +0000 UTC m=+146.923901328" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.628177 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" podStartSLOduration=121.628157804 podStartE2EDuration="2m1.628157804s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.627536303 +0000 UTC m=+146.933412527" watchObservedRunningTime="2026-01-31 09:03:28.628157804 +0000 UTC m=+146.934034008" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.716011 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:28 crc kubenswrapper[4732]: E0131 09:03:28.716618 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.216601791 +0000 UTC m=+147.522478005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.744492 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" podStartSLOduration=121.744471029 podStartE2EDuration="2m1.744471029s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:28.695294894 +0000 UTC m=+147.001171108" watchObservedRunningTime="2026-01-31 09:03:28.744471029 +0000 UTC m=+147.050347243" Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.819217 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:28 crc kubenswrapper[4732]: E0131 09:03:28.819701 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.319648659 +0000 UTC m=+147.625524863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:28 crc kubenswrapper[4732]: I0131 09:03:28.920517 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:28 crc kubenswrapper[4732]: E0131 09:03:28.923616 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.423593447 +0000 UTC m=+147.729469651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.000370 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:29 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:29 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:29 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.000451 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.024294 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.024781 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.52475605 +0000 UTC m=+147.830632264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.126506 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.126980 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.62696257 +0000 UTC m=+147.932838764 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.227384 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.227651 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.727609037 +0000 UTC m=+148.033485241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.227769 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.228139 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.728132085 +0000 UTC m=+148.034008279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.311912 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" event={"ID":"6377a401-b10b-455a-8906-f6706302b91f","Type":"ContainerStarted","Data":"d2181f4f5d1e2146e81d69f051730329e27e7743031a7694aa2c8da14c5cc18e"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.312789 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" event={"ID":"498d64fc-0d0f-43c6-aaae-bd3c5f0d7873","Type":"ContainerStarted","Data":"5a17f505d8d7fe1e77ceea12d0e357ef7e3a6e284431fc8a4213a4bb74cfe86b"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.313610 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" event={"ID":"6a680afa-dc56-4bf8-808c-b1c947c8fbf0","Type":"ContainerStarted","Data":"a77c8864885f592214013bdaabbf1546a4bf3b9d551e666ca93aa4b7dc47c77e"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.315072 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" event={"ID":"12b99fdc-6d61-46e1-b093-b1b92efce54c","Type":"ContainerStarted","Data":"70129bb72d9d428fb9c6b6a67de54e5e851c89d46ebba9123286e0fb055d052f"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.316358 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" event={"ID":"1d02584d-db7d-4bc0-8cd9-33081993309b","Type":"ContainerStarted","Data":"cb75563427d906c4558da5925381465530a10f0cf7883273950e99d8bbdb381e"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.316596 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.317809 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" event={"ID":"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54","Type":"ContainerStarted","Data":"6e978d92928544f375cf8cf096f96403f6dc278ebb6db452f3cb460900306e85"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.318951 4732 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wb8wq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.319049 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" podUID="1d02584d-db7d-4bc0-8cd9-33081993309b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 31 09:03:29 crc 
kubenswrapper[4732]: I0131 09:03:29.319636 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" event={"ID":"576b5a44-3c4c-4905-8d89-caed3b1eb43f","Type":"ContainerStarted","Data":"1204df07b0546c41d29b3218ad1ab3082ea47963a61ee256e5d242fba34c34cd"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.319851 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.321158 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dmhxf" event={"ID":"d64c27a7-b418-450e-9067-dde0cd145597","Type":"ContainerStarted","Data":"b505ca3ae676829a4263a0294c61e3ce507d8d59a73dcb7b8123dc58c64a5387"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.322405 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" event={"ID":"7c0ada8b-e2dc-418c-a43e-33789285388f","Type":"ContainerStarted","Data":"7cddae12a36276f7679a3410a2b60becc3d653fa6d431d86451ada4cde11b528"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.323177 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" event={"ID":"a3302e69-0f73-4974-a8ac-af1992933147","Type":"ContainerStarted","Data":"f53f4c1e20390b55703a4f25af2363cf2ce342ac193b7fd1994de4cad5ba828e"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.323938 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" event={"ID":"ffad13f5-fb20-46ac-b886-c7e5a29b6599","Type":"ContainerStarted","Data":"507a7d36ca763a7df7bff24d356848728d6fe563b1f52cd5ce5ddd06de33b76f"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.325052 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wqf9f" event={"ID":"4149606a-3fbc-4da9-ba05-dc473b492a89","Type":"ContainerStarted","Data":"cf5b5693469538faad28bf091a05f257185982cfc29214364d426b1d986730fa"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.327332 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xprfh" event={"ID":"81b523ca-b564-45d4-bad5-f7e236f2e6d0","Type":"ContainerStarted","Data":"b7972d03a35b0da756ebcf27d3b3d54065a50eb2f5fcb04291c30cf3451e300a"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.328521 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.328647 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.828623587 +0000 UTC m=+148.134499791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.328788 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.329113 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.829104383 +0000 UTC m=+148.134980587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.330156 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" event={"ID":"8cc29c02-baeb-4f46-92d6-684343509ae1","Type":"ContainerStarted","Data":"9a3719522a375ad4ac073e52e792f10112cb06992301dca226a8ef16512b0dc2"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.333812 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bzk95" event={"ID":"33169e52-3fee-462c-b341-46563ddbf5aa","Type":"ContainerStarted","Data":"e1e4c6691134cf8196fe8b794283b617486b1dc7ccf0e5b6545094033c51a1b1"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.335272 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" event={"ID":"f4d0ed50-aa9b-4a62-b340-882ddf73f008","Type":"ContainerStarted","Data":"393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.335559 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.337308 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-44wvz" event={"ID":"64e4fed2-f31d-4d3f-ad77-d55fdddacb4d","Type":"ContainerStarted","Data":"21db909e84f6fe0c354f0ddda90c23f8e0f3355d8c0cb038af1a5dfc0d26905f"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.337776 4732 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ljds4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 
10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.337819 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" podUID="f4d0ed50-aa9b-4a62-b340-882ddf73f008" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.338471 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hn5wx" event={"ID":"1e6408b2-d73d-4fc3-86f7-d29ea59ad32e","Type":"ContainerStarted","Data":"6ebf52ace6b30eaa61e0b5dbea6b27dd704ea4b0e58a9dcb3b1bc45908bfc7e8"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.339457 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" event={"ID":"0558933a-c8d6-45dc-aeaf-af86190b15a0","Type":"ContainerStarted","Data":"489c0c3b171c6d96e855fb1f2464c3d1486c21baf174589add9247f92fff3bcc"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.340639 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" event={"ID":"2325b276-ee4d-438d-b9d6-d7de3024ba96","Type":"ContainerStarted","Data":"2d86d760940b52671a07e286595f0df253a255586fad28ce45ad4e3a40ab2321"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.342488 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zcd2h" event={"ID":"075f442e-a691-4856-a6ea-e21f1dcbcb20","Type":"ContainerStarted","Data":"78a1039648041eacf8bd25e47005ea59bc47f6eabe7164523054ff8e894db943"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.345513 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" event={"ID":"caaa7607-b47d-43ca-adff-f9135baf7262","Type":"ContainerStarted","Data":"bcb59bdab6a65b50e8af721cc78a2babde93820c9d46e0d1ce242835778998fe"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.346371 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr" event={"ID":"56a9a21e-28d2-4386-9fe0-947c3a39ab6a","Type":"ContainerStarted","Data":"141c720196fa8b0005af28800d8defeba6bcedcff8b7e295be2ec049640f0f68"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.347743 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" event={"ID":"058f5386-f340-4a52-bfc8-9b1a60515c9b","Type":"ContainerStarted","Data":"24bbf9f597251423574db6ae22dfec4ef72223af8e75901547c2b1b66b0d7aa1"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.348316 4732 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7fqsl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.348370 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" podUID="058f5386-f340-4a52-bfc8-9b1a60515c9b" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.348794 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c7j22" event={"ID":"89cec875-cd1f-4867-8b4b-ca72c57c974b","Type":"ContainerStarted","Data":"dd02a6b4499666b4dd59125116e69274dc420104128afb9d750cd6a10e206e56"} Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.355803 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xgh7" podStartSLOduration=122.35578203 podStartE2EDuration="2m2.35578203s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.352696226 +0000 UTC m=+147.658572430" watchObservedRunningTime="2026-01-31 09:03:29.35578203 +0000 UTC m=+147.661658234" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.373435 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" podStartSLOduration=122.373413694 podStartE2EDuration="2m2.373413694s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.371031114 +0000 UTC m=+147.676907318" watchObservedRunningTime="2026-01-31 09:03:29.373413694 +0000 UTC m=+147.679289898" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.385078 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dmhxf" podStartSLOduration=6.385057776 podStartE2EDuration="6.385057776s" podCreationTimestamp="2026-01-31 09:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.383533244 +0000 UTC m=+147.689409448" watchObservedRunningTime="2026-01-31 09:03:29.385057776 +0000 UTC m=+147.690933980" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.400448 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" podStartSLOduration=122.400422342 podStartE2EDuration="2m2.400422342s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.400371471 +0000 UTC m=+147.706247675" watchObservedRunningTime="2026-01-31 09:03:29.400422342 +0000 UTC m=+147.706298566" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.420906 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vpnm5" podStartSLOduration=122.420887391 podStartE2EDuration="2m2.420887391s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.418306025 +0000 UTC m=+147.724182229" watchObservedRunningTime="2026-01-31 09:03:29.420887391 +0000 UTC m=+147.726763595" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.429438 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.431048 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:29.931014082 +0000 UTC m=+148.236890286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.447547 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" podStartSLOduration=122.447519547 podStartE2EDuration="2m2.447519547s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:29.437703837 +0000 UTC m=+147.743580041" watchObservedRunningTime="2026-01-31 09:03:29.447519547 +0000 UTC m=+147.753395751" Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.532248 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.532763 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.032737855 +0000 UTC m=+148.338614099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.634311 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.634438 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.134406637 +0000 UTC m=+148.440282841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.634592 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.634997 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.134987236 +0000 UTC m=+148.440863440 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.736030 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.736179 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.23615187 +0000 UTC m=+148.542028084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.736378 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.736732 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.23672321 +0000 UTC m=+148.542599414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.837732 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.837976 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.337924505 +0000 UTC m=+148.643800729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.838045 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.838449 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.338432213 +0000 UTC m=+148.644308477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:29 crc kubenswrapper[4732]: I0131 09:03:29.939840 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:29 crc kubenswrapper[4732]: E0131 09:03:29.940294 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.440267349 +0000 UTC m=+148.746143553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.001707 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:30 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:30 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:30 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.002137 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.041460 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.042042 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.542023974 +0000 UTC m=+148.847900178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.045436 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.143294 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.143567 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.64354427 +0000 UTC m=+148.949420474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.143914 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.144483 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.644466402 +0000 UTC m=+148.950342616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.245408 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.245693 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.745655336 +0000 UTC m=+149.051531540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.246080 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.246620 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.746605408 +0000 UTC m=+149.052481612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.273060 4732 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hzs92 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.273133 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" podUID="576b5a44-3c4c-4905-8d89-caed3b1eb43f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.273459 4732 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hzs92 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.273486 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" podUID="576b5a44-3c4c-4905-8d89-caed3b1eb43f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.347677 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.347921 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.847880117 +0000 UTC m=+149.153756321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.348018 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.348456 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.848448486 +0000 UTC m=+149.154324690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.355349 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" event={"ID":"0558933a-c8d6-45dc-aeaf-af86190b15a0","Type":"ContainerStarted","Data":"940d163c1a1a494d9850589a935618158e47c219b1ef2186264ecbca1a2bfccc"} Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.356713 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" event={"ID":"a3302e69-0f73-4974-a8ac-af1992933147","Type":"ContainerStarted","Data":"ea4607fbdadbd1827d030b75908c08b9ba649762448baddf7ff8d2ecb248d499"} Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.358262 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-f78bs" event={"ID":"eb80e0fc-6378-4d0d-a8b0-6c662073ed4d","Type":"ContainerStarted","Data":"bdbc34129773068cda5d67136cc28839379cfeca6c7947eca1f9de790cf604f0"} Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.359859 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" event={"ID":"498d64fc-0d0f-43c6-aaae-bd3c5f0d7873","Type":"ContainerStarted","Data":"23414bc19e006fd97e8b4fd01854cd2b5641c26df1de7e8f19251db919576ffa"} Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.361505 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" event={"ID":"6a680afa-dc56-4bf8-808c-b1c947c8fbf0","Type":"ContainerStarted","Data":"a3862c80161167518eda88d0fd349597505cb098b09e89ba3305d1bc84ea0ed4"} Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.365330 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" event={"ID":"caaa7607-b47d-43ca-adff-f9135baf7262","Type":"ContainerStarted","Data":"aa489e5b1d7de97707c97d8c45409de9509330430d456899f2d0fe83a1a6bb34"} Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.368508 4732 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wb8wq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.368548 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" podUID="1d02584d-db7d-4bc0-8cd9-33081993309b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.368619 4732 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7fqsl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.368727 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" podUID="058f5386-f340-4a52-bfc8-9b1a60515c9b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.369035 4732 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ljds4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.369160 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" podUID="f4d0ed50-aa9b-4a62-b340-882ddf73f008" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.369189 4732 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hzs92 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.369307 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92" podUID="576b5a44-3c4c-4905-8d89-caed3b1eb43f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.390275 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk" podStartSLOduration=123.390248292 podStartE2EDuration="2m3.390248292s" 
podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:30.387978626 +0000 UTC m=+148.693854840" watchObservedRunningTime="2026-01-31 09:03:30.390248292 +0000 UTC m=+148.696124496" Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.449455 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.449882 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.949834527 +0000 UTC m=+149.255710731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.450560 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.453360 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:30.953340965 +0000 UTC m=+149.259217169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.551476 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.551833 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 09:03:31.051801629 +0000 UTC m=+149.357677833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.552031 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.552381 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.052369498 +0000 UTC m=+149.358245752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.653012 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.653186 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.15315893 +0000 UTC m=+149.459035144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.653392 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.653779 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.15376843 +0000 UTC m=+149.459644634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.754227 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.754586 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.254545062 +0000 UTC m=+149.560421266 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.755925 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.756323 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.256304991 +0000 UTC m=+149.562181195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.856949 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.857257 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.357218687 +0000 UTC m=+149.663094891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.857615 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.858145 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.358134098 +0000 UTC m=+149.664010382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:30 crc kubenswrapper[4732]: I0131 09:03:30.959107 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:30 crc kubenswrapper[4732]: E0131 09:03:30.959501 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.459473277 +0000 UTC m=+149.765349481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.008086 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:31 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:31 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:31 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.008141 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.060833 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.061315 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.561289614 +0000 UTC m=+149.867165878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.162500 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.162709 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.662678186 +0000 UTC m=+149.968554390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.163113 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.163491 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.663474263 +0000 UTC m=+149.969350467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.264492 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.264973 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.764940737 +0000 UTC m=+150.070816941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.365707 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.366066 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.86604972 +0000 UTC m=+150.171925924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.371443 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" event={"ID":"a3302e69-0f73-4974-a8ac-af1992933147","Type":"ContainerStarted","Data":"cc667e0b4ea4ba9099055746ce1ef40fa9168b20ed76dbc45ac033af2550051e"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.373321 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr" event={"ID":"56a9a21e-28d2-4386-9fe0-947c3a39ab6a","Type":"ContainerStarted","Data":"0506ed446713bba7d913d7d72f36bde9335138ec24ae6b3717bdf39d20acf732"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.373357 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr" event={"ID":"56a9a21e-28d2-4386-9fe0-947c3a39ab6a","Type":"ContainerStarted","Data":"f00ffe2f6acf44cc4cbf8681079007ae91a44bb594caae37cfb5570d3245b593"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.387057 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" event={"ID":"e27a7ff4-a3a6-4654-99f7-6ec9f49c7c54","Type":"ContainerStarted","Data":"581a575d235456c51c60b0db035de8da48e1e100b41c01fa2984b0556d283d08"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.389158 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" event={"ID":"ffad13f5-fb20-46ac-b886-c7e5a29b6599","Type":"ContainerStarted","Data":"eb562913e37b3353a62315a1c03b58ee0a0a272fd3c3c7d9ebef244cd90fec26"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.396404 4732 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xprfh" event={"ID":"81b523ca-b564-45d4-bad5-f7e236f2e6d0","Type":"ContainerStarted","Data":"dcf51ba99369929b60b832aeb0040a500a30af59a5aa23dcaf0a87e6cdbdcdfe"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.407365 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" event={"ID":"caaa7607-b47d-43ca-adff-f9135baf7262","Type":"ContainerStarted","Data":"6ce469881e2193c514b734acd51a7186ea1f7ba0f20a682ec4474843899665da"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.408281 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.412653 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" event={"ID":"7c0ada8b-e2dc-418c-a43e-33789285388f","Type":"ContainerStarted","Data":"65fa56cb474e6e8c9ba944f7c61b1f51bc1fb2db8f8aee8f15e97de794c94fe3"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.412712 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" event={"ID":"7c0ada8b-e2dc-418c-a43e-33789285388f","Type":"ContainerStarted","Data":"c9d1fc71e4bc47fd6bad285937064829789d26f89fdac6334266068e64e95ab7"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.416191 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hn5wx" event={"ID":"1e6408b2-d73d-4fc3-86f7-d29ea59ad32e","Type":"ContainerStarted","Data":"c831ebe45cc7b88a8df1d753121060ac99849b8dbf30699862e6b9585af6a694"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.423369 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-llpv8" podStartSLOduration=124.423345018 podStartE2EDuration="2m4.423345018s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.402848738 +0000 UTC m=+149.708724972" watchObservedRunningTime="2026-01-31 09:03:31.423345018 +0000 UTC m=+149.729221222" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.429451 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" event={"ID":"2325b276-ee4d-438d-b9d6-d7de3024ba96","Type":"ContainerStarted","Data":"8092decd876b3c15cc891d582d26dbb6da04516ad5cea486a1dfa3f8d178a80a"} Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.429500 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.430594 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.438971 4732 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6fwnh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 31 09:03:31 crc 
kubenswrapper[4732]: I0131 09:03:31.439080 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" podUID="6a680afa-dc56-4bf8-808c-b1c947c8fbf0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.443103 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4qmgq" podStartSLOduration=124.443083712 podStartE2EDuration="2m4.443083712s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.420935667 +0000 UTC m=+149.726811881" watchObservedRunningTime="2026-01-31 09:03:31.443083712 +0000 UTC m=+149.748959916" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.443952 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bpncg" podStartSLOduration=124.443947091 podStartE2EDuration="2m4.443947091s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.441631443 +0000 UTC m=+149.747507647" watchObservedRunningTime="2026-01-31 09:03:31.443947091 +0000 UTC m=+149.749823295" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.467526 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.467740 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.467984 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.468902 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:31.96887904 +0000 UTC m=+150.274755244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.476817 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.483301 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xprfh" podStartSLOduration=124.483278474 podStartE2EDuration="2m4.483278474s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.482089335 +0000 UTC m=+149.787965549" watchObservedRunningTime="2026-01-31 09:03:31.483278474 +0000 UTC m=+149.789154678" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.484309 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.503622 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" podStartSLOduration=124.503600488 podStartE2EDuration="2m4.503600488s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.500195994 +0000 UTC m=+149.806072198" watchObservedRunningTime="2026-01-31 09:03:31.503600488 +0000 UTC m=+149.809476693" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.519966 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" podStartSLOduration=124.519904567 podStartE2EDuration="2m4.519904567s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.519011108 +0000 UTC m=+149.824887312" watchObservedRunningTime="2026-01-31 09:03:31.519904567 +0000 UTC m=+149.825780771" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.536077 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-f78bs" podStartSLOduration=9.53603972 podStartE2EDuration="9.53603972s" podCreationTimestamp="2026-01-31 09:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
09:03:31.534807519 +0000 UTC m=+149.840683733" watchObservedRunningTime="2026-01-31 09:03:31.53603972 +0000 UTC m=+149.841915924" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.553924 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" podStartSLOduration=124.553901591 podStartE2EDuration="2m4.553901591s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.550761885 +0000 UTC m=+149.856638089" watchObservedRunningTime="2026-01-31 09:03:31.553901591 +0000 UTC m=+149.859777795" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.570532 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.570940 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.570979 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.070957745 +0000 UTC m=+150.376834039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.571090 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.579222 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.573635 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v69mc" podStartSLOduration=124.573616475 podStartE2EDuration="2m4.573616475s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.569043791 +0000 UTC m=+149.874919995" watchObservedRunningTime="2026-01-31 09:03:31.573616475 +0000 UTC m=+149.879492689" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.581491 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.584705 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4sk2n" podStartSLOduration=124.584681377 podStartE2EDuration="2m4.584681377s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.58358842 +0000 UTC m=+149.889464624" watchObservedRunningTime="2026-01-31 09:03:31.584681377 +0000 UTC m=+149.890557581" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.614496 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hn5wx" podStartSLOduration=9.614472219 podStartE2EDuration="9.614472219s" podCreationTimestamp="2026-01-31 09:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.610138694 +0000 UTC m=+149.916014908" watchObservedRunningTime="2026-01-31 09:03:31.614472219 +0000 UTC m=+149.920348423" Jan 31 09:03:31 crc 
kubenswrapper[4732]: I0131 09:03:31.626974 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kgqfp" podStartSLOduration=124.62695322 podStartE2EDuration="2m4.62695322s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:31.625114968 +0000 UTC m=+149.930991182" watchObservedRunningTime="2026-01-31 09:03:31.62695322 +0000 UTC m=+149.932829424" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.657635 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.666646 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.672339 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.672492 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.172465861 +0000 UTC m=+150.478342075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.672741 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.673124 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.173112542 +0000 UTC m=+150.478988756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.674153 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.773633 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.773879 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.273841432 +0000 UTC m=+150.579717636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.774074 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.774539 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.274519065 +0000 UTC m=+150.580395269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.877301 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.878089 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.37806379 +0000 UTC m=+150.683939994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:31 crc kubenswrapper[4732]: I0131 09:03:31.979822 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:31 crc kubenswrapper[4732]: E0131 09:03:31.980466 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.480449706 +0000 UTC m=+150.786325910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.014309 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:32 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:32 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:32 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.014380 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.088540 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.088945 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.588920416 +0000 UTC m=+150.894796620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.197074 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.197873 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.697855371 +0000 UTC m=+151.003731585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.298334 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.298686 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.798638293 +0000 UTC m=+151.104514507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.298779 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.299204 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.799169441 +0000 UTC m=+151.105045645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.404220 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.404583 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:32.904561717 +0000 UTC m=+151.210437921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.452795 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d949875459e61dc3bb5371015bd0663088273aba212961ecb5389c7352f3c11e"}
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.466775 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c41d31339cf675ad3aff0cfdf91e7b7ab5f11341b830ded1e17edefbb99bb282"}
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.478471 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8c471013c02a116da2b4102b61c988c12b805b889021a45c3315cdb960278066"}
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.507479 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.507931 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.007916585 +0000 UTC m=+151.313792789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.508477 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx8cr" podStartSLOduration=125.508457314 podStartE2EDuration="2m5.508457314s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:32.508399182 +0000 UTC m=+150.814275376" watchObservedRunningTime="2026-01-31 09:03:32.508457314 +0000 UTC m=+150.814333518"
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.610641 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.612645 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.112615529 +0000 UTC m=+151.418491733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.714489 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.714897 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.21487584 +0000 UTC m=+151.520752044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.816247 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.816834 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.316812201 +0000 UTC m=+151.622688405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:32 crc kubenswrapper[4732]: I0131 09:03:32.917913 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:32 crc kubenswrapper[4732]: E0131 09:03:32.918279 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.418264365 +0000 UTC m=+151.724140569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.005088 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:33 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld
Jan 31 09:03:33 crc kubenswrapper[4732]: [+]process-running ok
Jan 31 09:03:33 crc kubenswrapper[4732]: healthz check failed
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.005162 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.018867 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.019089 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.519056717 +0000 UTC m=+151.824932921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.019249 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.019648 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.519641386 +0000 UTC m=+151.825517590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.120484 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.120701 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.620655185 +0000 UTC m=+151.926531389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.120761 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.121180 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.621165392 +0000 UTC m=+151.927041596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.222106 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.222448 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.72242457 +0000 UTC m=+152.028300774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.278191 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hzs92"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.323877 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.324409 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.824389262 +0000 UTC m=+152.130265466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.425242 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.425848 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:33.925809124 +0000 UTC m=+152.231685318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.478512 4732 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6fwnh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.478570 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" podUID="6a680afa-dc56-4bf8-808c-b1c947c8fbf0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.481377 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d320b4056ba1563d75c6d5c0e04bdd39555e92aeb617f54de39bbdd5d6bc1ef8"}
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.495125 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c7j22" event={"ID":"89cec875-cd1f-4867-8b4b-ca72c57c974b","Type":"ContainerStarted","Data":"ca100dc4a73e3df4d5093c77bd983ccb01bb25a5eaedf9c9761afdf3262b532b"}
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.497041 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6206971b4a2adbef5050e4400a61cd57951cc177f5bac49cdaadc5bec2d4686c"}
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.497687 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.499811 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"42e8dee280c1630c0745f649fd90440f36f7653aa5fb40ffe4868220bf561421"}
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.529699 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rtg8l"]
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.531312 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.532830 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.533314 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.033296101 +0000 UTC m=+152.339172305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.537022 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.552445 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtg8l"]
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.634250 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.634649 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654wm\" (UniqueName: \"kubernetes.io/projected/320c2656-6f30-4922-835e-8c27a82800b1-kube-api-access-654wm\") pod \"community-operators-rtg8l\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.634711 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-catalog-content\") pod \"community-operators-rtg8l\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.634752 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-utilities\") pod \"community-operators-rtg8l\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.635714 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.135689537 +0000 UTC m=+152.441565741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.705783 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gb54f"]
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.707046 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.711327 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.719167 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gb54f"]
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.736430 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.736508 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654wm\" (UniqueName: \"kubernetes.io/projected/320c2656-6f30-4922-835e-8c27a82800b1-kube-api-access-654wm\") pod \"community-operators-rtg8l\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.736542 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-catalog-content\") pod \"community-operators-rtg8l\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.736576 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-utilities\") pod \"community-operators-rtg8l\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.737086 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-utilities\") pod \"community-operators-rtg8l\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.737420 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.23740426 +0000 UTC m=+152.543280464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.738188 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-catalog-content\") pod \"community-operators-rtg8l\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.777750 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654wm\" (UniqueName: \"kubernetes.io/projected/320c2656-6f30-4922-835e-8c27a82800b1-kube-api-access-654wm\") pod \"community-operators-rtg8l\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.837565 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.837848 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.337799098 +0000 UTC m=+152.643675302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.838108 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27rqc\" (UniqueName: \"kubernetes.io/projected/111ca852-fddd-4fb1-8d5d-331fd5921a71-kube-api-access-27rqc\") pod \"certified-operators-gb54f\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.838201 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-utilities\") pod \"certified-operators-gb54f\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.838278 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.838310 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-catalog-content\") pod \"certified-operators-gb54f\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.838647 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.338639867 +0000 UTC m=+152.644516071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.870923 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.920974 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zflvq"]
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.921964 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.933034 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zflvq"]
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.938761 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.939036 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-utilities\") pod \"certified-operators-gb54f\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.939104 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-catalog-content\") pod \"certified-operators-gb54f\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.939144 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27rqc\" (UniqueName: \"kubernetes.io/projected/111ca852-fddd-4fb1-8d5d-331fd5921a71-kube-api-access-27rqc\") pod \"certified-operators-gb54f\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: E0131 09:03:33.939555 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.439535442 +0000 UTC m=+152.745411646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.940208 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-utilities\") pod \"certified-operators-gb54f\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.940520 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-catalog-content\") pod \"certified-operators-gb54f\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.958429 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27rqc\" (UniqueName: \"kubernetes.io/projected/111ca852-fddd-4fb1-8d5d-331fd5921a71-kube-api-access-27rqc\") pod \"certified-operators-gb54f\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:33 crc kubenswrapper[4732]: I0131 09:03:33.990504 4732 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.001163 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:34 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]process-running ok
Jan 31 09:03:34 crc kubenswrapper[4732]: healthz check failed
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.001249 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.022574 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.040562 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-catalog-content\") pod \"community-operators-zflvq\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.040629 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p95zf\" (UniqueName: \"kubernetes.io/projected/317b5076-0f62-45e5-9db0-8d03103c990e-kube-api-access-p95zf\") pod \"community-operators-zflvq\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.040657 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.040702 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-utilities\") pod \"community-operators-zflvq\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: E0131 09:03:34.041048 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.541032988 +0000 UTC m=+152.846909192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.108164 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6z6vm"]
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.110044 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.127281 4732 patch_prober.go:28] interesting pod/downloads-7954f5f757-rt2jr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.127344 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rt2jr" podUID="81e1781e-a935-4f3f-b2aa-9a0807f43c73" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.127536 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6z6vm"]
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.128210 4732 patch_prober.go:28] interesting pod/downloads-7954f5f757-rt2jr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.128233 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rt2jr" podUID="81e1781e-a935-4f3f-b2aa-9a0807f43c73" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.152120 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.152301 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p95zf\" (UniqueName: \"kubernetes.io/projected/317b5076-0f62-45e5-9db0-8d03103c990e-kube-api-access-p95zf\") pod \"community-operators-zflvq\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.152368 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-utilities\") pod \"community-operators-zflvq\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.152459 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-catalog-content\") pod \"community-operators-zflvq\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.152954 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-catalog-content\") pod \"community-operators-zflvq\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: E0131 09:03:34.153042 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.653019427 +0000 UTC m=+152.958895621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.153558 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-utilities\") pod \"community-operators-zflvq\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.178774 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p95zf\" (UniqueName: \"kubernetes.io/projected/317b5076-0f62-45e5-9db0-8d03103c990e-kube-api-access-p95zf\") pod \"community-operators-zflvq\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.244481 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.273133 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2f4q\" (UniqueName: \"kubernetes.io/projected/7006b68f-caf9-44a9-a6df-26e7b594b931-kube-api-access-m2f4q\") pod \"certified-operators-6z6vm\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.273190 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-catalog-content\") pod \"certified-operators-6z6vm\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.273234 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.273312 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-utilities\") pod \"certified-operators-6z6vm\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: E0131 09:03:34.274008 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.773992277 +0000 UTC m=+153.079868481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.293936 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtg8l"]
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.374434 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.374935 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:34 crc kubenswrapper[4732]: E0131 09:03:34.374965 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.874926554 +0000 UTC m=+153.180802758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.375160 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.375222 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-utilities\") pod \"certified-operators-6z6vm\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.375424 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2f4q\" (UniqueName: \"kubernetes.io/projected/7006b68f-caf9-44a9-a6df-26e7b594b931-kube-api-access-m2f4q\") pod \"certified-operators-6z6vm\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.375443 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.375467 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-catalog-content\") pod \"certified-operators-6z6vm\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: E0131 09:03:34.376360 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.876342602 +0000 UTC m=+153.182218886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.376396 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-utilities\") pod \"certified-operators-6z6vm\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.376555 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-catalog-content\") pod \"certified-operators-6z6vm\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.397555 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-76d6v"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.399585 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2f4q\" (UniqueName: \"kubernetes.io/projected/7006b68f-caf9-44a9-a6df-26e7b594b931-kube-api-access-m2f4q\") pod \"certified-operators-6z6vm\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.457990 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.465855 4732 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xprfh container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]log ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]etcd ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/max-in-flight-filter ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 31 09:03:34 crc kubenswrapper[4732]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-startinformers ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 31 09:03:34 crc kubenswrapper[4732]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 31 09:03:34 crc kubenswrapper[4732]: livez check failed
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.465932 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xprfh" podUID="81b523ca-b564-45d4-bad5-f7e236f2e6d0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.476849 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4732]: E0131 09:03:34.478178 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:34.978155388 +0000 UTC m=+153.284031602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.513964 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtg8l" event={"ID":"320c2656-6f30-4922-835e-8c27a82800b1","Type":"ContainerStarted","Data":"9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670"}
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.514036 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtg8l" event={"ID":"320c2656-6f30-4922-835e-8c27a82800b1","Type":"ContainerStarted","Data":"7ba9bda66cde21334a1fb904223442bb2f107c035dd197b8f6160f7ac322e79d"}
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.518673 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.518718 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.522599 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c7j22" event={"ID":"89cec875-cd1f-4867-8b4b-ca72c57c974b","Type":"ContainerStarted","Data":"352c6c95571c5d3cf1574f5d988513fb0ff48b49c2005e76ade9b33d9b508cf0"}
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.522641 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c7j22" event={"ID":"89cec875-cd1f-4867-8b4b-ca72c57c974b","Type":"ContainerStarted","Data":"a89fca39fecdd4183eef1ef80c7dc34f7605dbd4118ccd4f532fe86d73f32a03"}
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.522656 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c7j22" event={"ID":"89cec875-cd1f-4867-8b4b-ca72c57c974b","Type":"ContainerStarted","Data":"afd80b2706ba2e95e643c506800c8a0b7a4cc44f331f16f3cc71423f03107efc"}
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.556799 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.588274 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:34 crc kubenswrapper[4732]: E0131 09:03:34.588583 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.088562533 +0000 UTC m=+153.394438737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.600585 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zflvq"]
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.633929 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gb54f"]
Jan 31 09:03:34 crc kubenswrapper[4732]: W0131 09:03:34.660445 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317b5076_0f62_45e5_9db0_8d03103c990e.slice/crio-e3e3c2c80d6bcc6eb2c4ab51462ea9a6f6a150eb54a71c100214dd3061f85429 WatchSource:0}: Error finding container e3e3c2c80d6bcc6eb2c4ab51462ea9a6f6a150eb54a71c100214dd3061f85429: Status 404 returned error can't find the container with id e3e3c2c80d6bcc6eb2c4ab51462ea9a6f6a150eb54a71c100214dd3061f85429
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.690573 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4732]: E0131 09:03:34.690841 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.190803314 +0000 UTC m=+153.496679518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.691179 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:34 crc kubenswrapper[4732]: E0131 09:03:34.691535 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 09:03:35.191520928 +0000 UTC m=+153.497397132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-99dtb" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.739509 4732 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T09:03:33.990545659Z","Handler":null,"Name":""}
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.748538 4732 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.748578 4732 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.794371 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.799545 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.895844 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.899978 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6z6vm"]
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.905947 4732 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.906006 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.930639 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-99dtb\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:34 crc kubenswrapper[4732]: W0131 09:03:34.946623 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7006b68f_caf9_44a9_a6df_26e7b594b931.slice/crio-57ca1e232ee4486e6e4d54b1df45d6e963d8c6e1b497c54651b39471c8475a9c WatchSource:0}: Error finding container 57ca1e232ee4486e6e4d54b1df45d6e963d8c6e1b497c54651b39471c8475a9c: Status 404 returned error can't find the container with id 57ca1e232ee4486e6e4d54b1df45d6e963d8c6e1b497c54651b39471c8475a9c
Jan 31 09:03:34 crc kubenswrapper[4732]: I0131 09:03:34.951320 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.002892 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:35 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld
Jan 31 09:03:35 crc kubenswrapper[4732]: [+]process-running ok
Jan 31 09:03:35 crc kubenswrapper[4732]: healthz check failed
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.002984 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.227065 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99dtb"]
Jan 31 09:03:35 crc kubenswrapper[4732]: W0131 09:03:35.261771 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac602fa_14af_4ae0_a538_d73e938db036.slice/crio-039e3167eddd030475a90d04176c40d3799eaa481a42473d70722bc67a78215e WatchSource:0}: Error finding container 039e3167eddd030475a90d04176c40d3799eaa481a42473d70722bc67a78215e: Status 404 returned error can't find the container with id 039e3167eddd030475a90d04176c40d3799eaa481a42473d70722bc67a78215e
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.285050 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.285824 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.287926 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.288451 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.298695 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.345138 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8t8ks"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.346302 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8t8ks"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.347589 4732 patch_prober.go:28] interesting pod/console-f9d7485db-8t8ks container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.347631 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8t8ks" podUID="b35d0df8-53f0-4787-b0b4-c93be28f0127" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.403890 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.403944 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.505644 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.505724 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.505850 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.506824 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d7ngt"]
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.508095 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7ngt"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.510064 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.529263 4732 generic.go:334] "Generic (PLEG): container finished" podID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerID="b38eca76de0c7fcb760be9dbb97b202ed5f6069cdb395b15ae3017074d198d5e" exitCode=0
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.529391 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6z6vm" event={"ID":"7006b68f-caf9-44a9-a6df-26e7b594b931","Type":"ContainerDied","Data":"b38eca76de0c7fcb760be9dbb97b202ed5f6069cdb395b15ae3017074d198d5e"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.529431 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6z6vm" event={"ID":"7006b68f-caf9-44a9-a6df-26e7b594b931","Type":"ContainerStarted","Data":"57ca1e232ee4486e6e4d54b1df45d6e963d8c6e1b497c54651b39471c8475a9c"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.530297 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7ngt"]
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.531639 4732 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.532844 4732 generic.go:334] "Generic (PLEG): container finished" podID="320c2656-6f30-4922-835e-8c27a82800b1" containerID="9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670" exitCode=0
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.532915 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtg8l" event={"ID":"320c2656-6f30-4922-835e-8c27a82800b1","Type":"ContainerDied","Data":"9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.534572 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.537730 4732 generic.go:334] "Generic (PLEG): container finished" podID="317b5076-0f62-45e5-9db0-8d03103c990e" containerID="d955c74da8ffd285d20400dc34dfd51736c439bd9e7f63e99e5270665cbbadb8" exitCode=0
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.537823 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zflvq" event={"ID":"317b5076-0f62-45e5-9db0-8d03103c990e","Type":"ContainerDied","Data":"d955c74da8ffd285d20400dc34dfd51736c439bd9e7f63e99e5270665cbbadb8"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.537865 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zflvq" event={"ID":"317b5076-0f62-45e5-9db0-8d03103c990e","Type":"ContainerStarted","Data":"e3e3c2c80d6bcc6eb2c4ab51462ea9a6f6a150eb54a71c100214dd3061f85429"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.539294 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" event={"ID":"4ac602fa-14af-4ae0-a538-d73e938db036","Type":"ContainerStarted","Data":"76229ced9d7ea551eb476a7db3e9648ede271e5ea762d59f5c012cdd16284033"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.539321 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" event={"ID":"4ac602fa-14af-4ae0-a538-d73e938db036","Type":"ContainerStarted","Data":"039e3167eddd030475a90d04176c40d3799eaa481a42473d70722bc67a78215e"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.540699 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.550540 4732 generic.go:334] "Generic (PLEG): container finished" podID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerID="b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb" exitCode=0
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.551785 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gb54f" event={"ID":"111ca852-fddd-4fb1-8d5d-331fd5921a71","Type":"ContainerDied","Data":"b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.551867 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gb54f" event={"ID":"111ca852-fddd-4fb1-8d5d-331fd5921a71","Type":"ContainerStarted","Data":"2852e87c3d90e55a145bff8290e54ac7389fb8f75ebcdc35c688bd52463d5985"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.553991 4732 generic.go:334] "Generic (PLEG): container finished" podID="0558933a-c8d6-45dc-aeaf-af86190b15a0" containerID="940d163c1a1a494d9850589a935618158e47c219b1ef2186264ecbca1a2bfccc" exitCode=0
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.554838 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" event={"ID":"0558933a-c8d6-45dc-aeaf-af86190b15a0","Type":"ContainerDied","Data":"940d163c1a1a494d9850589a935618158e47c219b1ef2186264ecbca1a2bfccc"}
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.561012 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-85mvk"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.612026 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rk8\" (UniqueName: \"kubernetes.io/projected/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-kube-api-access-88rk8\") pod \"redhat-marketplace-d7ngt\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " pod="openshift-marketplace/redhat-marketplace-d7ngt"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.612803 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-catalog-content\") pod \"redhat-marketplace-d7ngt\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " pod="openshift-marketplace/redhat-marketplace-d7ngt"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.613019 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-utilities\") pod \"redhat-marketplace-d7ngt\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " pod="openshift-marketplace/redhat-marketplace-d7ngt"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.622611 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" podStartSLOduration=128.6225891 podStartE2EDuration="2m8.6225891s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:35.614828299 +0000 UTC m=+153.920704513" watchObservedRunningTime="2026-01-31 09:03:35.6225891 +0000 UTC m=+153.928465304"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.642071 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.714166 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rk8\" (UniqueName: \"kubernetes.io/projected/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-kube-api-access-88rk8\") pod \"redhat-marketplace-d7ngt\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " pod="openshift-marketplace/redhat-marketplace-d7ngt"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.714245 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-catalog-content\") pod \"redhat-marketplace-d7ngt\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " pod="openshift-marketplace/redhat-marketplace-d7ngt"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.714301 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-utilities\") pod \"redhat-marketplace-d7ngt\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " pod="openshift-marketplace/redhat-marketplace-d7ngt"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.715011 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-utilities\") pod \"redhat-marketplace-d7ngt\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " pod="openshift-marketplace/redhat-marketplace-d7ngt"
Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.715654 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-catalog-content\") pod \"redhat-marketplace-d7ngt\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " pod="openshift-marketplace/redhat-marketplace-d7ngt"
"MountVolume.SetUp succeeded for volume \"kube-api-access-88rk8\" (UniqueName: \"kubernetes.io/projected/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-kube-api-access-88rk8\") pod \"redhat-marketplace-d7ngt\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " pod="openshift-marketplace/redhat-marketplace-d7ngt" Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.785146 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-c7j22" podStartSLOduration=13.78512754 podStartE2EDuration="13.78512754s" podCreationTimestamp="2026-01-31 09:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:35.780304897 +0000 UTC m=+154.086181101" watchObservedRunningTime="2026-01-31 09:03:35.78512754 +0000 UTC m=+154.091003734" Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.822224 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7ngt" Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.929816 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2jzzz"] Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.930912 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:35 crc kubenswrapper[4732]: I0131 09:03:35.951083 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jzzz"] Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:35.999926 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.010198 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:36 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:36 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:36 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.010284 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.022999 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-utilities\") pod \"redhat-marketplace-2jzzz\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.023112 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-catalog-content\") pod \"redhat-marketplace-2jzzz\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.023134 4732 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm96d\" (UniqueName: \"kubernetes.io/projected/60fab354-a742-4d49-88d9-22843a857ea5-kube-api-access-nm96d\") pod \"redhat-marketplace-2jzzz\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.023343 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7fqsl" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.081918 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wb8wq" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.097201 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.140720 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-utilities\") pod \"redhat-marketplace-2jzzz\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.140831 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-catalog-content\") pod \"redhat-marketplace-2jzzz\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.140871 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm96d\" (UniqueName: \"kubernetes.io/projected/60fab354-a742-4d49-88d9-22843a857ea5-kube-api-access-nm96d\") pod \"redhat-marketplace-2jzzz\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.142957 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-utilities\") pod \"redhat-marketplace-2jzzz\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.145933 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-catalog-content\") pod \"redhat-marketplace-2jzzz\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.190440 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm96d\" (UniqueName: \"kubernetes.io/projected/60fab354-a742-4d49-88d9-22843a857ea5-kube-api-access-nm96d\") pod \"redhat-marketplace-2jzzz\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.260465 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.273549 4732 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.359788 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6fwnh" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.556556 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.557106 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7ngt"] Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.565057 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f6e5b3b-035c-42f1-a6e9-cc4614504712","Type":"ContainerStarted","Data":"3a6201eb755c1ceee6a6ee3a12601a491d4c76b48fb722418a24e1228f4d3bd9"} Jan 31 09:03:36 crc kubenswrapper[4732]: W0131 09:03:36.574027 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d6ffd83_fb99_48e0_a34a_fd365f971ef1.slice/crio-628f0999201f984caa436318589dd8628cafb6a41db0673c62163c5cc780a5ff WatchSource:0}: Error finding container 628f0999201f984caa436318589dd8628cafb6a41db0673c62163c5cc780a5ff: Status 404 returned error can't find the container with id 628f0999201f984caa436318589dd8628cafb6a41db0673c62163c5cc780a5ff Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.574393 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jzzz"] Jan 31 09:03:36 crc kubenswrapper[4732]: W0131 09:03:36.581043 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60fab354_a742_4d49_88d9_22843a857ea5.slice/crio-9ea503aa2d078ea5c77d132dad08c175b5d8c3c762d663e871a3d782440f1d6e WatchSource:0}: Error finding container 9ea503aa2d078ea5c77d132dad08c175b5d8c3c762d663e871a3d782440f1d6e: Status 404 returned error can't find the container with id 9ea503aa2d078ea5c77d132dad08c175b5d8c3c762d663e871a3d782440f1d6e Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.706303 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vdcdv"] Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.707634 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.710026 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.723328 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vdcdv"] Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.748903 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzc9\" (UniqueName: \"kubernetes.io/projected/b03cae03-72c1-4b13-8031-33381e6df48a-kube-api-access-2dzc9\") pod \"redhat-operators-vdcdv\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.749108 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-utilities\") pod \"redhat-operators-vdcdv\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.749168 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-catalog-content\") pod \"redhat-operators-vdcdv\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.751408 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.849785 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0558933a-c8d6-45dc-aeaf-af86190b15a0-secret-volume\") pod \"0558933a-c8d6-45dc-aeaf-af86190b15a0\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.849897 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0558933a-c8d6-45dc-aeaf-af86190b15a0-config-volume\") pod \"0558933a-c8d6-45dc-aeaf-af86190b15a0\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.849949 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txg4c\" (UniqueName: \"kubernetes.io/projected/0558933a-c8d6-45dc-aeaf-af86190b15a0-kube-api-access-txg4c\") pod \"0558933a-c8d6-45dc-aeaf-af86190b15a0\" (UID: \"0558933a-c8d6-45dc-aeaf-af86190b15a0\") " Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.850176 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzc9\" (UniqueName: \"kubernetes.io/projected/b03cae03-72c1-4b13-8031-33381e6df48a-kube-api-access-2dzc9\") pod \"redhat-operators-vdcdv\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.850240 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-utilities\") pod \"redhat-operators-vdcdv\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.850266 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-catalog-content\") pod \"redhat-operators-vdcdv\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.850659 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0558933a-c8d6-45dc-aeaf-af86190b15a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "0558933a-c8d6-45dc-aeaf-af86190b15a0" (UID: "0558933a-c8d6-45dc-aeaf-af86190b15a0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.851060 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-catalog-content\") pod \"redhat-operators-vdcdv\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.851103 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-utilities\") pod \"redhat-operators-vdcdv\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.856757 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0558933a-c8d6-45dc-aeaf-af86190b15a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0558933a-c8d6-45dc-aeaf-af86190b15a0" (UID: "0558933a-c8d6-45dc-aeaf-af86190b15a0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.857120 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0558933a-c8d6-45dc-aeaf-af86190b15a0-kube-api-access-txg4c" (OuterVolumeSpecName: "kube-api-access-txg4c") pod "0558933a-c8d6-45dc-aeaf-af86190b15a0" (UID: "0558933a-c8d6-45dc-aeaf-af86190b15a0"). InnerVolumeSpecName "kube-api-access-txg4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.871823 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzc9\" (UniqueName: \"kubernetes.io/projected/b03cae03-72c1-4b13-8031-33381e6df48a-kube-api-access-2dzc9\") pod \"redhat-operators-vdcdv\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.951838 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txg4c\" (UniqueName: \"kubernetes.io/projected/0558933a-c8d6-45dc-aeaf-af86190b15a0-kube-api-access-txg4c\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.951908 4732 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0558933a-c8d6-45dc-aeaf-af86190b15a0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:36 crc kubenswrapper[4732]: I0131 09:03:36.951918 4732 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0558933a-c8d6-45dc-aeaf-af86190b15a0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.002146 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:37 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:37 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:37 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.002231 4732 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.029355 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.117011 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zm4tc"] Jan 31 09:03:37 crc kubenswrapper[4732]: E0131 09:03:37.117266 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0558933a-c8d6-45dc-aeaf-af86190b15a0" containerName="collect-profiles" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.117280 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="0558933a-c8d6-45dc-aeaf-af86190b15a0" containerName="collect-profiles" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.117425 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="0558933a-c8d6-45dc-aeaf-af86190b15a0" containerName="collect-profiles" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.118357 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.131732 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zm4tc"] Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.154018 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-utilities\") pod \"redhat-operators-zm4tc\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.154099 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzkj6\" (UniqueName: \"kubernetes.io/projected/21393f97-49f1-4f27-a24c-93f88fe6596b-kube-api-access-rzkj6\") pod \"redhat-operators-zm4tc\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.154187 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-catalog-content\") pod \"redhat-operators-zm4tc\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.238407 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vdcdv"] Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.255268 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzkj6\" (UniqueName: \"kubernetes.io/projected/21393f97-49f1-4f27-a24c-93f88fe6596b-kube-api-access-rzkj6\") pod \"redhat-operators-zm4tc\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.255344 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-catalog-content\") pod \"redhat-operators-zm4tc\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.255413 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-utilities\") pod \"redhat-operators-zm4tc\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: W0131 09:03:37.255694 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb03cae03_72c1_4b13_8031_33381e6df48a.slice/crio-b2573936f6c20f59b9e6d6e1b605a68abfd082c81e607c5222d95cd6ee9a3427 WatchSource:0}: Error finding container b2573936f6c20f59b9e6d6e1b605a68abfd082c81e607c5222d95cd6ee9a3427: Status 404 returned error can't find the container with id b2573936f6c20f59b9e6d6e1b605a68abfd082c81e607c5222d95cd6ee9a3427 Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.256011 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-utilities\") pod \"redhat-operators-zm4tc\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.256082 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-catalog-content\") pod \"redhat-operators-zm4tc\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.277165 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzkj6\" (UniqueName: \"kubernetes.io/projected/21393f97-49f1-4f27-a24c-93f88fe6596b-kube-api-access-rzkj6\") pod \"redhat-operators-zm4tc\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.438121 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.574106 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.574105 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497500-p2gn7" event={"ID":"0558933a-c8d6-45dc-aeaf-af86190b15a0","Type":"ContainerDied","Data":"489c0c3b171c6d96e855fb1f2464c3d1486c21baf174589add9247f92fff3bcc"} Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.574628 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="489c0c3b171c6d96e855fb1f2464c3d1486c21baf174589add9247f92fff3bcc" Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.575748 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcdv" event={"ID":"b03cae03-72c1-4b13-8031-33381e6df48a","Type":"ContainerStarted","Data":"b2573936f6c20f59b9e6d6e1b605a68abfd082c81e607c5222d95cd6ee9a3427"} Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.577537 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jzzz" event={"ID":"60fab354-a742-4d49-88d9-22843a857ea5","Type":"ContainerStarted","Data":"5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3"} Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.577581 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jzzz" event={"ID":"60fab354-a742-4d49-88d9-22843a857ea5","Type":"ContainerStarted","Data":"9ea503aa2d078ea5c77d132dad08c175b5d8c3c762d663e871a3d782440f1d6e"} Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.579162 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f6e5b3b-035c-42f1-a6e9-cc4614504712","Type":"ContainerStarted","Data":"a362685bd59e615a1f2234cfabf15292c569459eb6135ba459f367652836aa93"} Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.584794 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7ngt" event={"ID":"3d6ffd83-fb99-48e0-a34a-fd365f971ef1","Type":"ContainerStarted","Data":"628f0999201f984caa436318589dd8628cafb6a41db0673c62163c5cc780a5ff"} Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.641469 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zm4tc"] Jan 31 09:03:37 crc kubenswrapper[4732]: W0131 09:03:37.699249 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21393f97_49f1_4f27_a24c_93f88fe6596b.slice/crio-72ec2ce1dae672abaa2d0accf204fcab615644a223c21af740bfdd14a5e3a998 WatchSource:0}: Error finding container 72ec2ce1dae672abaa2d0accf204fcab615644a223c21af740bfdd14a5e3a998: Status 404 returned error can't find the container with id 72ec2ce1dae672abaa2d0accf204fcab615644a223c21af740bfdd14a5e3a998 Jan 31 09:03:37 crc kubenswrapper[4732]: I0131 09:03:37.855319 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-f78bs" Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.009912 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:38 crc kubenswrapper[4732]: [-]has-synced failed: reason 
Jan 31 09:03:38 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld
Jan 31 09:03:38 crc kubenswrapper[4732]: [+]process-running ok
Jan 31 09:03:38 crc kubenswrapper[4732]: healthz check failed
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.009988 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.593173 4732 generic.go:334] "Generic (PLEG): container finished" podID="b03cae03-72c1-4b13-8031-33381e6df48a" containerID="4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1" exitCode=0
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.593937 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcdv" event={"ID":"b03cae03-72c1-4b13-8031-33381e6df48a","Type":"ContainerDied","Data":"4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1"}
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.596790 4732 generic.go:334] "Generic (PLEG): container finished" podID="60fab354-a742-4d49-88d9-22843a857ea5" containerID="5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3" exitCode=0
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.597370 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jzzz" event={"ID":"60fab354-a742-4d49-88d9-22843a857ea5","Type":"ContainerDied","Data":"5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3"}
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.600437 4732 generic.go:334] "Generic (PLEG): container finished" podID="8f6e5b3b-035c-42f1-a6e9-cc4614504712" containerID="a362685bd59e615a1f2234cfabf15292c569459eb6135ba459f367652836aa93" exitCode=0
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.600564 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f6e5b3b-035c-42f1-a6e9-cc4614504712","Type":"ContainerDied","Data":"a362685bd59e615a1f2234cfabf15292c569459eb6135ba459f367652836aa93"}
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.603895 4732 generic.go:334] "Generic (PLEG): container finished" podID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerID="b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867" exitCode=0
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.604056 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm4tc" event={"ID":"21393f97-49f1-4f27-a24c-93f88fe6596b","Type":"ContainerDied","Data":"b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867"}
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.604213 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm4tc" event={"ID":"21393f97-49f1-4f27-a24c-93f88fe6596b","Type":"ContainerStarted","Data":"72ec2ce1dae672abaa2d0accf204fcab615644a223c21af740bfdd14a5e3a998"}
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.619585 4732 generic.go:334] "Generic (PLEG): container finished" podID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerID="c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1" exitCode=0
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.619627 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7ngt" event={"ID":"3d6ffd83-fb99-48e0-a34a-fd365f971ef1","Type":"ContainerDied","Data":"c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1"}
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.973899 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.974910 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.978805 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.979083 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 31 09:03:38 crc kubenswrapper[4732]: I0131 09:03:38.999843 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.002585 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:39 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld
Jan 31 09:03:39 crc kubenswrapper[4732]: [+]process-running ok
Jan 31 09:03:39 crc kubenswrapper[4732]: healthz check failed
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.002677 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.088868 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0285bd11-6fe5-4206-9242-d008dde146bf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0285bd11-6fe5-4206-9242-d008dde146bf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.088951 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0285bd11-6fe5-4206-9242-d008dde146bf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0285bd11-6fe5-4206-9242-d008dde146bf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.189682 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0285bd11-6fe5-4206-9242-d008dde146bf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0285bd11-6fe5-4206-9242-d008dde146bf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.189776 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0285bd11-6fe5-4206-9242-d008dde146bf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0285bd11-6fe5-4206-9242-d008dde146bf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.189923 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0285bd11-6fe5-4206-9242-d008dde146bf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0285bd11-6fe5-4206-9242-d008dde146bf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.212303 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0285bd11-6fe5-4206-9242-d008dde146bf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0285bd11-6fe5-4206-9242-d008dde146bf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.300795 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.382653 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.398921 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xprfh"
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.673441 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 31 09:03:39 crc kubenswrapper[4732]: W0131 09:03:39.748817 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0285bd11_6fe5_4206_9242_d008dde146bf.slice/crio-5415799b82fcadfba2c54e03df6a02db24ca92c8ec2cbeeab74649ebabe5394f WatchSource:0}: Error finding container 5415799b82fcadfba2c54e03df6a02db24ca92c8ec2cbeeab74649ebabe5394f: Status 404 returned error can't find the container with id 5415799b82fcadfba2c54e03df6a02db24ca92c8ec2cbeeab74649ebabe5394f
Jan 31 09:03:39 crc kubenswrapper[4732]: I0131 09:03:39.981503 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.009968 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 09:03:40 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld
Jan 31 09:03:40 crc kubenswrapper[4732]: [+]process-running ok
Jan 31 09:03:40 crc kubenswrapper[4732]: healthz check failed
Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.010060 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.015782 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kube-api-access\") pod \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\" (UID: \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\") "
Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.015991 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kubelet-dir\") pod \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\" (UID: \"8f6e5b3b-035c-42f1-a6e9-cc4614504712\") "
Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.016471 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8f6e5b3b-035c-42f1-a6e9-cc4614504712" (UID: "8f6e5b3b-035c-42f1-a6e9-cc4614504712"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.017430 4732 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.024393 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8f6e5b3b-035c-42f1-a6e9-cc4614504712" (UID: "8f6e5b3b-035c-42f1-a6e9-cc4614504712"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.118374 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f6e5b3b-035c-42f1-a6e9-cc4614504712-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.689180 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0285bd11-6fe5-4206-9242-d008dde146bf","Type":"ContainerStarted","Data":"639c64b6ae585a9ef8feca7c751071f95968131e8f1617d3627410b4bf02ba7f"} Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.689834 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0285bd11-6fe5-4206-9242-d008dde146bf","Type":"ContainerStarted","Data":"5415799b82fcadfba2c54e03df6a02db24ca92c8ec2cbeeab74649ebabe5394f"} Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.698337 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f6e5b3b-035c-42f1-a6e9-cc4614504712","Type":"ContainerDied","Data":"3a6201eb755c1ceee6a6ee3a12601a491d4c76b48fb722418a24e1228f4d3bd9"} Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.698400 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a6201eb755c1ceee6a6ee3a12601a491d4c76b48fb722418a24e1228f4d3bd9" Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.698412 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 09:03:40 crc kubenswrapper[4732]: I0131 09:03:40.704993 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.704957181 podStartE2EDuration="2.704957181s" podCreationTimestamp="2026-01-31 09:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:40.701103392 +0000 UTC m=+159.006979596" watchObservedRunningTime="2026-01-31 09:03:40.704957181 +0000 UTC m=+159.010833385" Jan 31 09:03:41 crc kubenswrapper[4732]: I0131 09:03:41.000953 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:41 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:41 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:41 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:41 crc kubenswrapper[4732]: I0131 09:03:41.001047 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:41 crc kubenswrapper[4732]: I0131 09:03:41.733817 4732 generic.go:334] "Generic (PLEG): container finished" podID="0285bd11-6fe5-4206-9242-d008dde146bf" containerID="639c64b6ae585a9ef8feca7c751071f95968131e8f1617d3627410b4bf02ba7f" exitCode=0 Jan 31 09:03:41 crc kubenswrapper[4732]: I0131 09:03:41.733993 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0285bd11-6fe5-4206-9242-d008dde146bf","Type":"ContainerDied","Data":"639c64b6ae585a9ef8feca7c751071f95968131e8f1617d3627410b4bf02ba7f"} Jan 31 09:03:42 crc kubenswrapper[4732]: I0131 09:03:42.002808 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:42 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:42 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:42 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:42 crc kubenswrapper[4732]: I0131 09:03:42.002897 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:43 crc kubenswrapper[4732]: I0131 09:03:42.999673 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:43 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:43 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:43 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:43 crc kubenswrapper[4732]: I0131 09:03:43.000143 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:44 crc kubenswrapper[4732]: I0131 09:03:44.027918 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:44 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:44 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:44 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:44 crc kubenswrapper[4732]: I0131 09:03:44.028283 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:44 crc kubenswrapper[4732]: I0131 09:03:44.127013 4732 patch_prober.go:28] interesting pod/downloads-7954f5f757-rt2jr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 31 09:03:44 crc kubenswrapper[4732]: I0131 09:03:44.127068 4732 patch_prober.go:28] interesting pod/downloads-7954f5f757-rt2jr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 31 09:03:44 crc kubenswrapper[4732]: I0131 09:03:44.127092 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rt2jr" 
podUID="81e1781e-a935-4f3f-b2aa-9a0807f43c73" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 31 09:03:44 crc kubenswrapper[4732]: I0131 09:03:44.127131 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rt2jr" podUID="81e1781e-a935-4f3f-b2aa-9a0807f43c73" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 31 09:03:45 crc kubenswrapper[4732]: I0131 09:03:44.999967 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:45 crc kubenswrapper[4732]: [-]has-synced failed: reason withheld Jan 31 09:03:45 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:45 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:45 crc kubenswrapper[4732]: I0131 09:03:45.000038 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:45 crc kubenswrapper[4732]: I0131 09:03:45.345153 4732 patch_prober.go:28] interesting pod/console-f9d7485db-8t8ks container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 31 09:03:45 crc kubenswrapper[4732]: I0131 09:03:45.345227 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8t8ks" podUID="b35d0df8-53f0-4787-b0b4-c93be28f0127" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 31 09:03:45 crc kubenswrapper[4732]: I0131 09:03:45.999434 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:45 crc kubenswrapper[4732]: [+]has-synced ok Jan 31 09:03:45 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:45 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:45 crc kubenswrapper[4732]: I0131 09:03:45.999508 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:47 crc kubenswrapper[4732]: I0131 09:03:47.002600 4732 patch_prober.go:28] interesting pod/router-default-5444994796-h5q9f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 09:03:47 crc kubenswrapper[4732]: [+]has-synced ok Jan 31 09:03:47 crc kubenswrapper[4732]: [+]process-running ok Jan 31 09:03:47 crc kubenswrapper[4732]: healthz check failed Jan 31 09:03:47 crc kubenswrapper[4732]: I0131 09:03:47.003060 4732 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-h5q9f" podUID="8814e7c8-5104-40f7-9761-4feedc15697b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 09:03:47 crc kubenswrapper[4732]: I0131 09:03:47.498216 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:03:47 crc kubenswrapper[4732]: I0131 09:03:47.498311 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:03:47 crc kubenswrapper[4732]: I0131 09:03:47.999712 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:48 crc kubenswrapper[4732]: I0131 09:03:48.003538 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h5q9f" Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.790432 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.810797 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.816045 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0285bd11-6fe5-4206-9242-d008dde146bf","Type":"ContainerDied","Data":"5415799b82fcadfba2c54e03df6a02db24ca92c8ec2cbeeab74649ebabe5394f"} Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.816088 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5415799b82fcadfba2c54e03df6a02db24ca92c8ec2cbeeab74649ebabe5394f" Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.816161 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.867702 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bd29a31-1a47-40da-afc5-6c4423067083-metrics-certs\") pod \"network-metrics-daemon-7fgvm\" (UID: \"3bd29a31-1a47-40da-afc5-6c4423067083\") " pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.912425 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0285bd11-6fe5-4206-9242-d008dde146bf-kubelet-dir\") pod \"0285bd11-6fe5-4206-9242-d008dde146bf\" (UID: \"0285bd11-6fe5-4206-9242-d008dde146bf\") " Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.912486 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0285bd11-6fe5-4206-9242-d008dde146bf-kube-api-access\") pod \"0285bd11-6fe5-4206-9242-d008dde146bf\" (UID: \"0285bd11-6fe5-4206-9242-d008dde146bf\") " Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.912918 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0285bd11-6fe5-4206-9242-d008dde146bf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0285bd11-6fe5-4206-9242-d008dde146bf" (UID: "0285bd11-6fe5-4206-9242-d008dde146bf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:03:49 crc kubenswrapper[4732]: I0131 09:03:49.916364 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0285bd11-6fe5-4206-9242-d008dde146bf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0285bd11-6fe5-4206-9242-d008dde146bf" (UID: "0285bd11-6fe5-4206-9242-d008dde146bf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:03:50 crc kubenswrapper[4732]: I0131 09:03:50.014597 4732 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0285bd11-6fe5-4206-9242-d008dde146bf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:50 crc kubenswrapper[4732]: I0131 09:03:50.014642 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0285bd11-6fe5-4206-9242-d008dde146bf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:03:50 crc kubenswrapper[4732]: I0131 09:03:50.078345 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7fgvm" Jan 31 09:03:52 crc kubenswrapper[4732]: I0131 09:03:52.173601 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7fgvm"] Jan 31 09:03:52 crc kubenswrapper[4732]: W0131 09:03:52.177080 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bd29a31_1a47_40da_afc5_6c4423067083.slice/crio-ee556c74eedfac86d916cb65b34af21b0afa537527a3a4c1560d3aa3abefeb98 WatchSource:0}: Error finding container ee556c74eedfac86d916cb65b34af21b0afa537527a3a4c1560d3aa3abefeb98: Status 404 returned error can't find the container with id ee556c74eedfac86d916cb65b34af21b0afa537527a3a4c1560d3aa3abefeb98 Jan 31 09:03:52 crc kubenswrapper[4732]: I0131 09:03:52.833931 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" event={"ID":"3bd29a31-1a47-40da-afc5-6c4423067083","Type":"ContainerStarted","Data":"f51fea5aa94fc11ca8c52095f900fac0afbde7512c0605479a4ad661cd093d61"} Jan 31 09:03:52 crc kubenswrapper[4732]: I0131 09:03:52.834440 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" event={"ID":"3bd29a31-1a47-40da-afc5-6c4423067083","Type":"ContainerStarted","Data":"ee556c74eedfac86d916cb65b34af21b0afa537527a3a4c1560d3aa3abefeb98"} Jan 31 09:03:53 crc kubenswrapper[4732]: I0131 09:03:53.842248 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7fgvm" event={"ID":"3bd29a31-1a47-40da-afc5-6c4423067083","Type":"ContainerStarted","Data":"0a32f92f4f8121eeadd59580865703e262199790d72ed17d2599e76cb35dc3b8"} Jan 31 09:03:53 crc kubenswrapper[4732]: I0131 09:03:53.862459 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7fgvm" podStartSLOduration=146.862432866 podStartE2EDuration="2m26.862432866s" podCreationTimestamp="2026-01-31 09:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:03:53.859572549 +0000 UTC m=+172.165448773" watchObservedRunningTime="2026-01-31 09:03:53.862432866 +0000 UTC m=+172.168309070" Jan 31 09:03:54 crc kubenswrapper[4732]: I0131 09:03:54.143974 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-rt2jr" Jan 31 09:03:54 crc kubenswrapper[4732]: I0131 09:03:54.957334 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:03:55 crc kubenswrapper[4732]: I0131 09:03:55.349092 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:03:55 crc kubenswrapper[4732]: I0131 09:03:55.352747 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8t8ks" Jan 31 09:04:06 crc kubenswrapper[4732]: I0131 09:04:06.078003 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7bg" Jan 31 09:04:11 crc kubenswrapper[4732]: I0131 09:04:11.672505 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 09:04:14 crc kubenswrapper[4732]: 
I0131 09:04:14.371438 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 09:04:14 crc kubenswrapper[4732]: E0131 09:04:14.372999 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0285bd11-6fe5-4206-9242-d008dde146bf" containerName="pruner" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.373145 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="0285bd11-6fe5-4206-9242-d008dde146bf" containerName="pruner" Jan 31 09:04:14 crc kubenswrapper[4732]: E0131 09:04:14.373342 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6e5b3b-035c-42f1-a6e9-cc4614504712" containerName="pruner" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.373431 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6e5b3b-035c-42f1-a6e9-cc4614504712" containerName="pruner" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.373638 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="0285bd11-6fe5-4206-9242-d008dde146bf" containerName="pruner" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.373769 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6e5b3b-035c-42f1-a6e9-cc4614504712" containerName="pruner" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.374405 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.376657 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.379512 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.385764 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.487513 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7497adea-5f95-433f-b644-9aa3eae85937-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7497adea-5f95-433f-b644-9aa3eae85937\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.487619 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7497adea-5f95-433f-b644-9aa3eae85937-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7497adea-5f95-433f-b644-9aa3eae85937\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.588696 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7497adea-5f95-433f-b644-9aa3eae85937-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7497adea-5f95-433f-b644-9aa3eae85937\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.588807 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7497adea-5f95-433f-b644-9aa3eae85937-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7497adea-5f95-433f-b644-9aa3eae85937\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.588846 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7497adea-5f95-433f-b644-9aa3eae85937-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7497adea-5f95-433f-b644-9aa3eae85937\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.611960 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7497adea-5f95-433f-b644-9aa3eae85937-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7497adea-5f95-433f-b644-9aa3eae85937\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:14 crc kubenswrapper[4732]: I0131 09:04:14.706206 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:16 crc kubenswrapper[4732]: E0131 09:04:16.640946 4732 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 09:04:16 crc kubenswrapper[4732]: E0131 09:04:16.641831 4732 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzkj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zm4tc_openshift-marketplace(21393f97-49f1-4f27-a24c-93f88fe6596b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:16 crc kubenswrapper[4732]: E0131 09:04:16.643094 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/redhat-operators-zm4tc" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" Jan 31 09:04:17 crc kubenswrapper[4732]: I0131 09:04:17.497558 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:04:17 crc kubenswrapper[4732]: I0131 09:04:17.497746 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.369340 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.370347 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.380545 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 09:04:18 crc kubenswrapper[4732]: E0131 09:04:18.431404 4732 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 09:04:18 crc kubenswrapper[4732]: E0131 09:04:18.431606 4732 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88rk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-d7ngt_openshift-marketplace(3d6ffd83-fb99-48e0-a34a-fd365f971ef1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 31 09:04:18 crc kubenswrapper[4732]: E0131 09:04:18.433136 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-d7ngt" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.546353 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.546628 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e90ec082-a189-4726-8049-2151ddf77961-kube-api-access\") pod \"installer-9-crc\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.546709 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-var-lock\") pod \"installer-9-crc\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.647861 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e90ec082-a189-4726-8049-2151ddf77961-kube-api-access\") pod \"installer-9-crc\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.647923 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-var-lock\") pod \"installer-9-crc\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.647990 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.648089 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.648081 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-var-lock\") pod \"installer-9-crc\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.674787 4732 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e90ec082-a189-4726-8049-2151ddf77961-kube-api-access\") pod \"installer-9-crc\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:18 crc kubenswrapper[4732]: I0131 09:04:18.702598 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:04:20 crc kubenswrapper[4732]: E0131 09:04:20.209195 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-d7ngt" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" Jan 31 09:04:20 crc kubenswrapper[4732]: E0131 09:04:20.209576 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zm4tc" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" Jan 31 09:04:20 crc kubenswrapper[4732]: E0131 09:04:20.330501 4732 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 09:04:20 crc kubenswrapper[4732]: E0131 09:04:20.330751 4732 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-654wm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rtg8l_openshift-marketplace(320c2656-6f30-4922-835e-8c27a82800b1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:20 crc kubenswrapper[4732]: E0131 09:04:20.331920 4732 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rtg8l" podUID="320c2656-6f30-4922-835e-8c27a82800b1" Jan 31 09:04:20 crc kubenswrapper[4732]: E0131 09:04:20.514378 4732 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 09:04:20 crc kubenswrapper[4732]: E0131 09:04:20.514561 4732 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm96d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2jzzz_openshift-marketplace(60fab354-a742-4d49-88d9-22843a857ea5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:20 crc kubenswrapper[4732]: E0131 09:04:20.515782 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2jzzz" podUID="60fab354-a742-4d49-88d9-22843a857ea5" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.381928 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rtg8l" podUID="320c2656-6f30-4922-835e-8c27a82800b1" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.382351 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2jzzz" podUID="60fab354-a742-4d49-88d9-22843a857ea5" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.519641 4732 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.519901 4732 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27rqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gb54f_openshift-marketplace(111ca852-fddd-4fb1-8d5d-331fd5921a71): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.523177 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gb54f" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.622646 4732 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.622870 4732 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dzc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vdcdv_openshift-marketplace(b03cae03-72c1-4b13-8031-33381e6df48a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.624070 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vdcdv" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.715504 4732 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.715689 4732 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2f4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6z6vm_openshift-marketplace(7006b68f-caf9-44a9-a6df-26e7b594b931): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 31 09:04:23 crc kubenswrapper[4732]: E0131 09:04:23.717010 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6z6vm" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931"
Jan 31 09:04:23 crc kubenswrapper[4732]: I0131 09:04:23.844279 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 31 09:04:23 crc kubenswrapper[4732]: I0131 09:04:23.907027 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 31 09:04:23 crc kubenswrapper[4732]: W0131 09:04:23.917479 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode90ec082_a189_4726_8049_2151ddf77961.slice/crio-68aff39b648c97e8da52037b14095371b6089843216d06b0916075106acf4b04 WatchSource:0}: Error finding container 68aff39b648c97e8da52037b14095371b6089843216d06b0916075106acf4b04: Status 404 returned error can't find the container with id 68aff39b648c97e8da52037b14095371b6089843216d06b0916075106acf4b04
Jan 31 09:04:24 crc kubenswrapper[4732]: I0131 09:04:24.027351 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7497adea-5f95-433f-b644-9aa3eae85937","Type":"ContainerStarted","Data":"6feeea57ea5b40e213efe92826dc377f85d1355d80c79473b3d790d2a355624b"}
Jan 31 09:04:24 crc kubenswrapper[4732]: I0131 09:04:24.028874 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e90ec082-a189-4726-8049-2151ddf77961","Type":"ContainerStarted","Data":"68aff39b648c97e8da52037b14095371b6089843216d06b0916075106acf4b04"}
Jan 31 09:04:24 crc kubenswrapper[4732]: E0131 09:04:24.030217 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6z6vm" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931"
Jan 31 09:04:24 crc kubenswrapper[4732]: E0131 09:04:24.031450 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gb54f" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71"
Jan 31 09:04:24 crc kubenswrapper[4732]: E0131 09:04:24.031688 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vdcdv" podUID="b03cae03-72c1-4b13-8031-33381e6df48a"
Jan 31 09:04:24 crc kubenswrapper[4732]: E0131 09:04:24.307061 4732 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 31 09:04:24 crc kubenswrapper[4732]: E0131 09:04:24.307629 4732 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p95zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zflvq_openshift-marketplace(317b5076-0f62-45e5-9db0-8d03103c990e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 31 09:04:24 crc kubenswrapper[4732]: E0131 09:04:24.308837 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zflvq" podUID="317b5076-0f62-45e5-9db0-8d03103c990e"
Jan 31 09:04:25 crc kubenswrapper[4732]: I0131 09:04:25.035027 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e90ec082-a189-4726-8049-2151ddf77961","Type":"ContainerStarted","Data":"23b1edb2efd137d80917cfc98d36d9f6d054406b735c02e22b781cbf0e7d7c9e"}
Jan 31 09:04:25 crc kubenswrapper[4732]: I0131 09:04:25.038249 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7497adea-5f95-433f-b644-9aa3eae85937","Type":"ContainerStarted","Data":"1bdec52283dba4c8385c51f2ac489c9b0b6a615f0a924e079199dd135ef936d5"}
Jan 31 09:04:25 crc kubenswrapper[4732]: E0131 09:04:25.039585 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zflvq" podUID="317b5076-0f62-45e5-9db0-8d03103c990e"
Jan 31 09:04:25 crc kubenswrapper[4732]: I0131 09:04:25.051914 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.051890452 podStartE2EDuration="7.051890452s" podCreationTimestamp="2026-01-31 09:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:25.050290358 +0000 UTC m=+203.356166562" watchObservedRunningTime="2026-01-31 09:04:25.051890452 +0000 UTC m=+203.357766656"
Jan 31 09:04:25 crc kubenswrapper[4732]: I0131 09:04:25.075318 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=11.075294041 podStartE2EDuration="11.075294041s" podCreationTimestamp="2026-01-31 09:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:25.069741444 +0000 UTC m=+203.375617648" watchObservedRunningTime="2026-01-31 09:04:25.075294041 +0000 UTC m=+203.381170245"
Jan 31 09:04:26 crc kubenswrapper[4732]: I0131 09:04:26.046084 4732 generic.go:334] "Generic (PLEG): container finished" podID="7497adea-5f95-433f-b644-9aa3eae85937" containerID="1bdec52283dba4c8385c51f2ac489c9b0b6a615f0a924e079199dd135ef936d5" exitCode=0
Jan 31 09:04:26 crc kubenswrapper[4732]: I0131 09:04:26.046168 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7497adea-5f95-433f-b644-9aa3eae85937","Type":"ContainerDied","Data":"1bdec52283dba4c8385c51f2ac489c9b0b6a615f0a924e079199dd135ef936d5"}
Jan 31 09:04:27 crc kubenswrapper[4732]: I0131 09:04:27.478247 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:27 crc kubenswrapper[4732]: I0131 09:04:27.571988 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7497adea-5f95-433f-b644-9aa3eae85937-kubelet-dir\") pod \"7497adea-5f95-433f-b644-9aa3eae85937\" (UID: \"7497adea-5f95-433f-b644-9aa3eae85937\") " Jan 31 09:04:27 crc kubenswrapper[4732]: I0131 09:04:27.572075 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7497adea-5f95-433f-b644-9aa3eae85937-kube-api-access\") pod \"7497adea-5f95-433f-b644-9aa3eae85937\" (UID: \"7497adea-5f95-433f-b644-9aa3eae85937\") " Jan 31 09:04:27 crc kubenswrapper[4732]: I0131 09:04:27.572183 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7497adea-5f95-433f-b644-9aa3eae85937-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7497adea-5f95-433f-b644-9aa3eae85937" (UID: "7497adea-5f95-433f-b644-9aa3eae85937"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:04:27 crc kubenswrapper[4732]: I0131 09:04:27.572419 4732 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7497adea-5f95-433f-b644-9aa3eae85937-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:27 crc kubenswrapper[4732]: I0131 09:04:27.577552 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7497adea-5f95-433f-b644-9aa3eae85937-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7497adea-5f95-433f-b644-9aa3eae85937" (UID: "7497adea-5f95-433f-b644-9aa3eae85937"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:27 crc kubenswrapper[4732]: I0131 09:04:27.673500 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7497adea-5f95-433f-b644-9aa3eae85937-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:28 crc kubenswrapper[4732]: I0131 09:04:28.062302 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7497adea-5f95-433f-b644-9aa3eae85937","Type":"ContainerDied","Data":"6feeea57ea5b40e213efe92826dc377f85d1355d80c79473b3d790d2a355624b"} Jan 31 09:04:28 crc kubenswrapper[4732]: I0131 09:04:28.062745 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6feeea57ea5b40e213efe92826dc377f85d1355d80c79473b3d790d2a355624b" Jan 31 09:04:28 crc kubenswrapper[4732]: I0131 09:04:28.062415 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 09:04:33 crc kubenswrapper[4732]: I0131 09:04:33.096206 4732 generic.go:334] "Generic (PLEG): container finished" podID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerID="ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b" exitCode=0 Jan 31 09:04:33 crc kubenswrapper[4732]: I0131 09:04:33.096305 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7ngt" event={"ID":"3d6ffd83-fb99-48e0-a34a-fd365f971ef1","Type":"ContainerDied","Data":"ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b"} Jan 31 09:04:34 crc kubenswrapper[4732]: I0131 09:04:34.104556 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm4tc" event={"ID":"21393f97-49f1-4f27-a24c-93f88fe6596b","Type":"ContainerStarted","Data":"056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa"} Jan 31 09:04:34 crc kubenswrapper[4732]: I0131 09:04:34.107452 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7ngt" event={"ID":"3d6ffd83-fb99-48e0-a34a-fd365f971ef1","Type":"ContainerStarted","Data":"32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc"} Jan 31 09:04:34 crc kubenswrapper[4732]: I0131 09:04:34.144492 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d7ngt" podStartSLOduration=4.107056758 podStartE2EDuration="59.144466527s" podCreationTimestamp="2026-01-31 09:03:35 +0000 UTC" firstStartedPulling="2026-01-31 09:03:38.62184422 +0000 UTC m=+156.927720424" lastFinishedPulling="2026-01-31 09:04:33.659253989 +0000 UTC m=+211.965130193" observedRunningTime="2026-01-31 09:04:34.141148185 +0000 UTC m=+212.447024399" watchObservedRunningTime="2026-01-31 09:04:34.144466527 +0000 UTC m=+212.450342731" Jan 31 09:04:35 crc kubenswrapper[4732]: I0131 09:04:35.120105 4732 generic.go:334] "Generic (PLEG): container finished" podID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerID="056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa" exitCode=0 Jan 31 09:04:35 crc kubenswrapper[4732]: I0131 09:04:35.120182 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm4tc" event={"ID":"21393f97-49f1-4f27-a24c-93f88fe6596b","Type":"ContainerDied","Data":"056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa"} Jan 31 09:04:35 crc kubenswrapper[4732]: I0131 09:04:35.822710 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d7ngt" Jan 31 09:04:35 crc kubenswrapper[4732]: I0131 09:04:35.823053 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d7ngt" Jan 31 09:04:35 crc kubenswrapper[4732]: I0131 09:04:35.958170 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d7ngt" Jan 31 09:04:36 crc kubenswrapper[4732]: I0131 09:04:36.127266 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm4tc" event={"ID":"21393f97-49f1-4f27-a24c-93f88fe6596b","Type":"ContainerStarted","Data":"e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42"} Jan 31 09:04:36 crc kubenswrapper[4732]: I0131 09:04:36.129955 4732 generic.go:334] "Generic (PLEG): container finished" 
podID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerID="3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd" exitCode=0 Jan 31 09:04:36 crc kubenswrapper[4732]: I0131 09:04:36.130016 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gb54f" event={"ID":"111ca852-fddd-4fb1-8d5d-331fd5921a71","Type":"ContainerDied","Data":"3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd"} Jan 31 09:04:36 crc kubenswrapper[4732]: I0131 09:04:36.133096 4732 generic.go:334] "Generic (PLEG): container finished" podID="60fab354-a742-4d49-88d9-22843a857ea5" containerID="dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248" exitCode=0 Jan 31 09:04:36 crc kubenswrapper[4732]: I0131 09:04:36.133130 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jzzz" event={"ID":"60fab354-a742-4d49-88d9-22843a857ea5","Type":"ContainerDied","Data":"dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248"} Jan 31 09:04:36 crc kubenswrapper[4732]: I0131 09:04:36.145570 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zm4tc" podStartSLOduration=2.195853665 podStartE2EDuration="59.145547288s" podCreationTimestamp="2026-01-31 09:03:37 +0000 UTC" firstStartedPulling="2026-01-31 09:03:38.607527659 +0000 UTC m=+156.913403863" lastFinishedPulling="2026-01-31 09:04:35.557221282 +0000 UTC m=+213.863097486" observedRunningTime="2026-01-31 09:04:36.14441682 +0000 UTC m=+214.450293044" watchObservedRunningTime="2026-01-31 09:04:36.145547288 +0000 UTC m=+214.451423492" Jan 31 09:04:37 crc kubenswrapper[4732]: I0131 09:04:37.140883 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gb54f" event={"ID":"111ca852-fddd-4fb1-8d5d-331fd5921a71","Type":"ContainerStarted","Data":"ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a"} Jan 31 09:04:37 crc kubenswrapper[4732]: I0131 09:04:37.143277 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jzzz" event={"ID":"60fab354-a742-4d49-88d9-22843a857ea5","Type":"ContainerStarted","Data":"2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82"} Jan 31 09:04:37 crc kubenswrapper[4732]: I0131 09:04:37.145776 4732 generic.go:334] "Generic (PLEG): container finished" podID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerID="57ef0bf4e5dc7d6762819c0d28bec5f496f13d673b9961467d54456931c326d3" exitCode=0 Jan 31 09:04:37 crc kubenswrapper[4732]: I0131 09:04:37.145823 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6z6vm" event={"ID":"7006b68f-caf9-44a9-a6df-26e7b594b931","Type":"ContainerDied","Data":"57ef0bf4e5dc7d6762819c0d28bec5f496f13d673b9961467d54456931c326d3"} Jan 31 09:04:37 crc kubenswrapper[4732]: I0131 09:04:37.162977 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gb54f" podStartSLOduration=3.116631031 podStartE2EDuration="1m4.162951668s" podCreationTimestamp="2026-01-31 09:03:33 +0000 UTC" firstStartedPulling="2026-01-31 09:03:35.554412596 +0000 UTC m=+153.860288800" lastFinishedPulling="2026-01-31 09:04:36.600733233 +0000 UTC m=+214.906609437" observedRunningTime="2026-01-31 09:04:37.162378478 +0000 UTC m=+215.468254692" watchObservedRunningTime="2026-01-31 09:04:37.162951668 +0000 UTC m=+215.468827872" Jan 31 09:04:37 crc 
Jan 31 09:04:37 crc kubenswrapper[4732]: I0131 09:04:37.439508 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zm4tc"
Jan 31 09:04:37 crc kubenswrapper[4732]: I0131 09:04:37.439573 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zm4tc"
Jan 31 09:04:38 crc kubenswrapper[4732]: I0131 09:04:38.152651 4732 generic.go:334] "Generic (PLEG): container finished" podID="317b5076-0f62-45e5-9db0-8d03103c990e" containerID="a394a4a7aba70397e2142712f48785a38bdafa854de238e63e52f068af8200df" exitCode=0
Jan 31 09:04:38 crc kubenswrapper[4732]: I0131 09:04:38.152725 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zflvq" event={"ID":"317b5076-0f62-45e5-9db0-8d03103c990e","Type":"ContainerDied","Data":"a394a4a7aba70397e2142712f48785a38bdafa854de238e63e52f068af8200df"}
Jan 31 09:04:38 crc kubenswrapper[4732]: I0131 09:04:38.155425 4732 generic.go:334] "Generic (PLEG): container finished" podID="b03cae03-72c1-4b13-8031-33381e6df48a" containerID="cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123" exitCode=0
Jan 31 09:04:38 crc kubenswrapper[4732]: I0131 09:04:38.155468 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcdv" event={"ID":"b03cae03-72c1-4b13-8031-33381e6df48a","Type":"ContainerDied","Data":"cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123"}
Jan 31 09:04:38 crc kubenswrapper[4732]: I0131 09:04:38.159155 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6z6vm" event={"ID":"7006b68f-caf9-44a9-a6df-26e7b594b931","Type":"ContainerStarted","Data":"0f914774fa96054fd035bd65f6de0e6eceaa4af864e8f50ebeec42c0731c97cc"}
Jan 31 09:04:38 crc kubenswrapper[4732]: I0131 09:04:38.161155 4732 generic.go:334] "Generic (PLEG): container finished" podID="320c2656-6f30-4922-835e-8c27a82800b1" containerID="7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86" exitCode=0
Jan 31 09:04:38 crc kubenswrapper[4732]: I0131 09:04:38.161220 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtg8l" event={"ID":"320c2656-6f30-4922-835e-8c27a82800b1","Type":"ContainerDied","Data":"7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86"}
Jan 31 09:04:38 crc kubenswrapper[4732]: I0131 09:04:38.196531 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6z6vm" podStartSLOduration=2.162773319 podStartE2EDuration="1m4.196505692s" podCreationTimestamp="2026-01-31 09:03:34 +0000 UTC" firstStartedPulling="2026-01-31 09:03:35.531329309 +0000 UTC m=+153.837205523" lastFinishedPulling="2026-01-31 09:04:37.565061692 +0000 UTC m=+215.870937896" observedRunningTime="2026-01-31 09:04:38.194927678 +0000 UTC m=+216.500803882" watchObservedRunningTime="2026-01-31 09:04:38.196505692 +0000 UTC m=+216.502381896"
Jan 31 09:04:38 crc kubenswrapper[4732]: I0131 09:04:38.483248 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zm4tc" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerName="registry-server" probeResult="failure" output=<
Jan 31 09:04:38 crc kubenswrapper[4732]: timeout: failed to connect service ":50051" within 1s
Jan 31 09:04:38 crc kubenswrapper[4732]: >
Jan 31 09:04:39 crc kubenswrapper[4732]: I0131 09:04:39.169195 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtg8l" event={"ID":"320c2656-6f30-4922-835e-8c27a82800b1","Type":"ContainerStarted","Data":"c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d"}
Jan 31 09:04:39 crc kubenswrapper[4732]: I0131 09:04:39.172174 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zflvq" event={"ID":"317b5076-0f62-45e5-9db0-8d03103c990e","Type":"ContainerStarted","Data":"55d6734cf3e7c05d22db17a72a71b78b94e9eb31ed346ef88787ef99a88abf99"}
Jan 31 09:04:39 crc kubenswrapper[4732]: I0131 09:04:39.175476 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcdv" event={"ID":"b03cae03-72c1-4b13-8031-33381e6df48a","Type":"ContainerStarted","Data":"3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6"}
Jan 31 09:04:39 crc kubenswrapper[4732]: I0131 09:04:39.193837 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rtg8l" podStartSLOduration=3.137973783 podStartE2EDuration="1m6.193812163s" podCreationTimestamp="2026-01-31 09:03:33 +0000 UTC" firstStartedPulling="2026-01-31 09:03:35.535910444 +0000 UTC m=+153.841786648" lastFinishedPulling="2026-01-31 09:04:38.591748824 +0000 UTC m=+216.897625028" observedRunningTime="2026-01-31 09:04:39.189324892 +0000 UTC m=+217.495201106" watchObservedRunningTime="2026-01-31 09:04:39.193812163 +0000 UTC m=+217.499688367"
Jan 31 09:04:39 crc kubenswrapper[4732]: I0131 09:04:39.210755 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vdcdv" podStartSLOduration=3.107831587 podStartE2EDuration="1m3.210735784s" podCreationTimestamp="2026-01-31 09:03:36 +0000 UTC" firstStartedPulling="2026-01-31 09:03:38.595919979 +0000 UTC m=+156.901796183" lastFinishedPulling="2026-01-31 09:04:38.698824166 +0000 UTC m=+217.004700380" observedRunningTime="2026-01-31 09:04:39.207081211 +0000 UTC m=+217.512957415" watchObservedRunningTime="2026-01-31 09:04:39.210735784 +0000 UTC m=+217.516611988"
Jan 31 09:04:39 crc kubenswrapper[4732]: I0131 09:04:39.226263 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zflvq" podStartSLOduration=3.147929828 podStartE2EDuration="1m6.226245788s" podCreationTimestamp="2026-01-31 09:03:33 +0000 UTC" firstStartedPulling="2026-01-31 09:03:35.543504999 +0000 UTC m=+153.849381203" lastFinishedPulling="2026-01-31 09:04:38.621820959 +0000 UTC m=+216.927697163" observedRunningTime="2026-01-31 09:04:39.225683339 +0000 UTC m=+217.531559543" watchObservedRunningTime="2026-01-31 09:04:39.226245788 +0000 UTC m=+217.532121992"
Jan 31 09:04:43 crc kubenswrapper[4732]: I0131 09:04:43.872366 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:04:43 crc kubenswrapper[4732]: I0131 09:04:43.873558 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:04:43 crc kubenswrapper[4732]: I0131 09:04:43.919017 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.023693 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.023959 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.076826 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.245140 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.245672 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.245802 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gb54f"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.259101 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rtg8l"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.299774 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.458867 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.458918 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:04:44 crc kubenswrapper[4732]: I0131 09:04:44.501439 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:04:45 crc kubenswrapper[4732]: I0131 09:04:45.245903 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6z6vm"
Jan 31 09:04:45 crc kubenswrapper[4732]: I0131 09:04:45.249436 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zflvq"
Jan 31 09:04:45 crc kubenswrapper[4732]: I0131 09:04:45.861803 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d7ngt"
Jan 31 09:04:46 crc kubenswrapper[4732]: I0131 09:04:46.274770 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2jzzz"
Jan 31 09:04:46 crc kubenswrapper[4732]: I0131 09:04:46.274833 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2jzzz"
Jan 31 09:04:46 crc kubenswrapper[4732]: I0131 09:04:46.310577 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2jzzz"
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:04:46 crc kubenswrapper[4732]: I0131 09:04:46.384899 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6z6vm"] Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.030904 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.031234 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.070883 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.217338 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6z6vm" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerName="registry-server" containerID="cri-o://0f914774fa96054fd035bd65f6de0e6eceaa4af864e8f50ebeec42c0731c97cc" gracePeriod=2 Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.252056 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.252573 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.383743 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zflvq"] Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.482400 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.499448 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.499530 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.499579 4732 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.503405 4732 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5"} pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.503601 4732 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" containerID="cri-o://ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5" gracePeriod=600 Jan 31 09:04:47 crc kubenswrapper[4732]: I0131 09:04:47.528642 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:04:48 crc kubenswrapper[4732]: I0131 09:04:48.222320 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zflvq" podUID="317b5076-0f62-45e5-9db0-8d03103c990e" containerName="registry-server" containerID="cri-o://55d6734cf3e7c05d22db17a72a71b78b94e9eb31ed346ef88787ef99a88abf99" gracePeriod=2 Jan 31 09:04:48 crc kubenswrapper[4732]: I0131 09:04:48.783681 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jzzz"] Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.231692 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d790207-d357-4b47-87bf-5b505e061820" containerID="ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5" exitCode=0 Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.231853 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerDied","Data":"ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5"} Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.232067 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"1a1af67f6e9c90030eed50fdab77c62259e76a7813864bb504390768e9501756"} Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.235639 4732 generic.go:334] "Generic (PLEG): container finished" podID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerID="0f914774fa96054fd035bd65f6de0e6eceaa4af864e8f50ebeec42c0731c97cc" exitCode=0 Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.235713 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6z6vm" event={"ID":"7006b68f-caf9-44a9-a6df-26e7b594b931","Type":"ContainerDied","Data":"0f914774fa96054fd035bd65f6de0e6eceaa4af864e8f50ebeec42c0731c97cc"} Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.241035 4732 generic.go:334] "Generic (PLEG): container finished" podID="317b5076-0f62-45e5-9db0-8d03103c990e" containerID="55d6734cf3e7c05d22db17a72a71b78b94e9eb31ed346ef88787ef99a88abf99" exitCode=0 Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.241115 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zflvq" event={"ID":"317b5076-0f62-45e5-9db0-8d03103c990e","Type":"ContainerDied","Data":"55d6734cf3e7c05d22db17a72a71b78b94e9eb31ed346ef88787ef99a88abf99"} Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.241355 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2jzzz" podUID="60fab354-a742-4d49-88d9-22843a857ea5" containerName="registry-server" containerID="cri-o://2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82" gracePeriod=2 Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.584351 4732 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-6z6vm" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.661713 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.666367 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2f4q\" (UniqueName: \"kubernetes.io/projected/7006b68f-caf9-44a9-a6df-26e7b594b931-kube-api-access-m2f4q\") pod \"7006b68f-caf9-44a9-a6df-26e7b594b931\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.666498 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-utilities\") pod \"7006b68f-caf9-44a9-a6df-26e7b594b931\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.666540 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-catalog-content\") pod \"7006b68f-caf9-44a9-a6df-26e7b594b931\" (UID: \"7006b68f-caf9-44a9-a6df-26e7b594b931\") " Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.671709 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7006b68f-caf9-44a9-a6df-26e7b594b931-kube-api-access-m2f4q" (OuterVolumeSpecName: "kube-api-access-m2f4q") pod "7006b68f-caf9-44a9-a6df-26e7b594b931" (UID: "7006b68f-caf9-44a9-a6df-26e7b594b931"). InnerVolumeSpecName "kube-api-access-m2f4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.683650 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-utilities" (OuterVolumeSpecName: "utilities") pod "7006b68f-caf9-44a9-a6df-26e7b594b931" (UID: "7006b68f-caf9-44a9-a6df-26e7b594b931"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.719121 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7006b68f-caf9-44a9-a6df-26e7b594b931" (UID: "7006b68f-caf9-44a9-a6df-26e7b594b931"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.768494 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm96d\" (UniqueName: \"kubernetes.io/projected/60fab354-a742-4d49-88d9-22843a857ea5-kube-api-access-nm96d\") pod \"60fab354-a742-4d49-88d9-22843a857ea5\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.768569 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-catalog-content\") pod \"60fab354-a742-4d49-88d9-22843a857ea5\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.768693 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-utilities\") pod \"60fab354-a742-4d49-88d9-22843a857ea5\" (UID: \"60fab354-a742-4d49-88d9-22843a857ea5\") " Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.770877 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-utilities" (OuterVolumeSpecName: "utilities") pod "60fab354-a742-4d49-88d9-22843a857ea5" (UID: "60fab354-a742-4d49-88d9-22843a857ea5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.771360 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.771375 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7006b68f-caf9-44a9-a6df-26e7b594b931-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.771391 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2f4q\" (UniqueName: \"kubernetes.io/projected/7006b68f-caf9-44a9-a6df-26e7b594b931-kube-api-access-m2f4q\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.771409 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.775477 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fab354-a742-4d49-88d9-22843a857ea5-kube-api-access-nm96d" (OuterVolumeSpecName: "kube-api-access-nm96d") pod "60fab354-a742-4d49-88d9-22843a857ea5" (UID: "60fab354-a742-4d49-88d9-22843a857ea5"). InnerVolumeSpecName "kube-api-access-nm96d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.806431 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60fab354-a742-4d49-88d9-22843a857ea5" (UID: "60fab354-a742-4d49-88d9-22843a857ea5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.873113 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm96d\" (UniqueName: \"kubernetes.io/projected/60fab354-a742-4d49-88d9-22843a857ea5-kube-api-access-nm96d\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.873156 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60fab354-a742-4d49-88d9-22843a857ea5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:49 crc kubenswrapper[4732]: I0131 09:04:49.972678 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zflvq" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.076219 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-utilities\") pod \"317b5076-0f62-45e5-9db0-8d03103c990e\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.076331 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p95zf\" (UniqueName: \"kubernetes.io/projected/317b5076-0f62-45e5-9db0-8d03103c990e-kube-api-access-p95zf\") pod \"317b5076-0f62-45e5-9db0-8d03103c990e\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.076419 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-catalog-content\") pod \"317b5076-0f62-45e5-9db0-8d03103c990e\" (UID: \"317b5076-0f62-45e5-9db0-8d03103c990e\") " Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.077675 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-utilities" (OuterVolumeSpecName: "utilities") pod "317b5076-0f62-45e5-9db0-8d03103c990e" (UID: "317b5076-0f62-45e5-9db0-8d03103c990e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.080250 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317b5076-0f62-45e5-9db0-8d03103c990e-kube-api-access-p95zf" (OuterVolumeSpecName: "kube-api-access-p95zf") pod "317b5076-0f62-45e5-9db0-8d03103c990e" (UID: "317b5076-0f62-45e5-9db0-8d03103c990e"). InnerVolumeSpecName "kube-api-access-p95zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.134385 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "317b5076-0f62-45e5-9db0-8d03103c990e" (UID: "317b5076-0f62-45e5-9db0-8d03103c990e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.177607 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p95zf\" (UniqueName: \"kubernetes.io/projected/317b5076-0f62-45e5-9db0-8d03103c990e-kube-api-access-p95zf\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.177854 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.177950 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b5076-0f62-45e5-9db0-8d03103c990e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.249290 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zflvq" event={"ID":"317b5076-0f62-45e5-9db0-8d03103c990e","Type":"ContainerDied","Data":"e3e3c2c80d6bcc6eb2c4ab51462ea9a6f6a150eb54a71c100214dd3061f85429"} Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.249354 4732 scope.go:117] "RemoveContainer" containerID="55d6734cf3e7c05d22db17a72a71b78b94e9eb31ed346ef88787ef99a88abf99" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.249353 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zflvq" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.252525 4732 generic.go:334] "Generic (PLEG): container finished" podID="60fab354-a742-4d49-88d9-22843a857ea5" containerID="2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82" exitCode=0 Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.252604 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jzzz" event={"ID":"60fab354-a742-4d49-88d9-22843a857ea5","Type":"ContainerDied","Data":"2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82"} Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.252638 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jzzz" event={"ID":"60fab354-a742-4d49-88d9-22843a857ea5","Type":"ContainerDied","Data":"9ea503aa2d078ea5c77d132dad08c175b5d8c3c762d663e871a3d782440f1d6e"} Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.252753 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jzzz" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.257476 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6z6vm" event={"ID":"7006b68f-caf9-44a9-a6df-26e7b594b931","Type":"ContainerDied","Data":"57ca1e232ee4486e6e4d54b1df45d6e963d8c6e1b497c54651b39471c8475a9c"} Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.257569 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6z6vm" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.274771 4732 scope.go:117] "RemoveContainer" containerID="a394a4a7aba70397e2142712f48785a38bdafa854de238e63e52f068af8200df" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.286368 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jzzz"] Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.288656 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jzzz"] Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.299155 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zflvq"] Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.302690 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zflvq"] Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.311715 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6z6vm"] Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.314902 4732 scope.go:117] "RemoveContainer" containerID="d955c74da8ffd285d20400dc34dfd51736c439bd9e7f63e99e5270665cbbadb8" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.316781 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6z6vm"] Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.327571 4732 scope.go:117] "RemoveContainer" containerID="2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.343881 4732 scope.go:117] "RemoveContainer" containerID="dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.358764 4732 scope.go:117] "RemoveContainer" containerID="5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.380692 4732 scope.go:117] "RemoveContainer" containerID="2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82" Jan 31 09:04:50 crc kubenswrapper[4732]: E0131 09:04:50.381283 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82\": container with ID starting with 2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82 not found: ID does not exist" containerID="2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.381319 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82"} err="failed to get container status \"2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82\": rpc error: code = NotFound desc = could not find container \"2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82\": container with ID starting with 2c08da5c8ccbc31693fa4ac757f74d0642d9c7b6beaf27556c5171f34d657f82 not found: ID does not exist" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.381372 4732 scope.go:117] "RemoveContainer" containerID="dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248" Jan 31 09:04:50 crc kubenswrapper[4732]: E0131 09:04:50.381998 4732 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248\": container with ID starting with dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248 not found: ID does not exist" containerID="dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.382042 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248"} err="failed to get container status \"dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248\": rpc error: code = NotFound desc = could not find container \"dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248\": container with ID starting with dcb61f969a4ae5bab5da17004391ab9facb17348431592ecfb379299b5c6a248 not found: ID does not exist" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.382079 4732 scope.go:117] "RemoveContainer" containerID="5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3" Jan 31 09:04:50 crc kubenswrapper[4732]: E0131 09:04:50.382487 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3\": container with ID starting with 5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3 not found: ID does not exist" containerID="5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.382517 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3"} err="failed to get container status \"5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3\": rpc error: code = NotFound desc = could not find container \"5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3\": container with ID starting with 5530625f265d27e331890e3d6915df6f40e55b5333079f815cda7a875fb373a3 not found: ID does not exist" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.382538 4732 scope.go:117] "RemoveContainer" containerID="0f914774fa96054fd035bd65f6de0e6eceaa4af864e8f50ebeec42c0731c97cc" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.405072 4732 scope.go:117] "RemoveContainer" containerID="57ef0bf4e5dc7d6762819c0d28bec5f496f13d673b9961467d54456931c326d3" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.420888 4732 scope.go:117] "RemoveContainer" containerID="b38eca76de0c7fcb760be9dbb97b202ed5f6069cdb395b15ae3017074d198d5e" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.548655 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317b5076-0f62-45e5-9db0-8d03103c990e" path="/var/lib/kubelet/pods/317b5076-0f62-45e5-9db0-8d03103c990e/volumes" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.549368 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60fab354-a742-4d49-88d9-22843a857ea5" path="/var/lib/kubelet/pods/60fab354-a742-4d49-88d9-22843a857ea5/volumes" Jan 31 09:04:50 crc kubenswrapper[4732]: I0131 09:04:50.550012 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931" path="/var/lib/kubelet/pods/7006b68f-caf9-44a9-a6df-26e7b594b931/volumes" Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.191064 4732 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zm4tc"] Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.191982 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zm4tc" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerName="registry-server" containerID="cri-o://e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42" gracePeriod=2 Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.588527 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.697839 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-utilities\") pod \"21393f97-49f1-4f27-a24c-93f88fe6596b\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.697884 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-catalog-content\") pod \"21393f97-49f1-4f27-a24c-93f88fe6596b\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.697938 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzkj6\" (UniqueName: \"kubernetes.io/projected/21393f97-49f1-4f27-a24c-93f88fe6596b-kube-api-access-rzkj6\") pod \"21393f97-49f1-4f27-a24c-93f88fe6596b\" (UID: \"21393f97-49f1-4f27-a24c-93f88fe6596b\") " Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.699110 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-utilities" (OuterVolumeSpecName: "utilities") pod "21393f97-49f1-4f27-a24c-93f88fe6596b" (UID: "21393f97-49f1-4f27-a24c-93f88fe6596b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.703289 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21393f97-49f1-4f27-a24c-93f88fe6596b-kube-api-access-rzkj6" (OuterVolumeSpecName: "kube-api-access-rzkj6") pod "21393f97-49f1-4f27-a24c-93f88fe6596b" (UID: "21393f97-49f1-4f27-a24c-93f88fe6596b"). InnerVolumeSpecName "kube-api-access-rzkj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.799341 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.799385 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzkj6\" (UniqueName: \"kubernetes.io/projected/21393f97-49f1-4f27-a24c-93f88fe6596b-kube-api-access-rzkj6\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.827616 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21393f97-49f1-4f27-a24c-93f88fe6596b" (UID: "21393f97-49f1-4f27-a24c-93f88fe6596b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:51 crc kubenswrapper[4732]: I0131 09:04:51.900191 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21393f97-49f1-4f27-a24c-93f88fe6596b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.272404 4732 generic.go:334] "Generic (PLEG): container finished" podID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerID="e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42" exitCode=0 Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.272484 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zm4tc" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.272519 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm4tc" event={"ID":"21393f97-49f1-4f27-a24c-93f88fe6596b","Type":"ContainerDied","Data":"e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42"} Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.272914 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm4tc" event={"ID":"21393f97-49f1-4f27-a24c-93f88fe6596b","Type":"ContainerDied","Data":"72ec2ce1dae672abaa2d0accf204fcab615644a223c21af740bfdd14a5e3a998"} Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.272934 4732 scope.go:117] "RemoveContainer" containerID="e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.289060 4732 scope.go:117] "RemoveContainer" containerID="056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.301455 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zm4tc"] Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.305009 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zm4tc"] Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.329165 4732 scope.go:117] "RemoveContainer" containerID="b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.344726 4732 scope.go:117] "RemoveContainer" containerID="e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42" Jan 31 09:04:52 crc kubenswrapper[4732]: E0131 09:04:52.345213 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42\": container with ID starting with e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42 not found: ID does not exist" containerID="e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.345287 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42"} err="failed to get container status \"e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42\": rpc error: code = NotFound desc = could not find container \"e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42\": container with ID starting with e62da55b95d81d1296cd0d4dc32e38fc2f1cdae4524faa40e7adaa74ec4b6e42 not found: ID does not exist" Jan 31 09:04:52 crc 
kubenswrapper[4732]: I0131 09:04:52.345318 4732 scope.go:117] "RemoveContainer" containerID="056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa" Jan 31 09:04:52 crc kubenswrapper[4732]: E0131 09:04:52.345691 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa\": container with ID starting with 056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa not found: ID does not exist" containerID="056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.345722 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa"} err="failed to get container status \"056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa\": rpc error: code = NotFound desc = could not find container \"056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa\": container with ID starting with 056f06710fefcf74e3a8f3ec56d61c726350593a141e241038ff15c0bdb571fa not found: ID does not exist" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.345744 4732 scope.go:117] "RemoveContainer" containerID="b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867" Jan 31 09:04:52 crc kubenswrapper[4732]: E0131 09:04:52.346032 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867\": container with ID starting with b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867 not found: ID does not exist" containerID="b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.346064 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867"} err="failed to get container status \"b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867\": rpc error: code = NotFound desc = could not find container \"b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867\": container with ID starting with b61b2dcac543f0ef563e0d943499efa08ba5e1c076661ef3c5b44487b1cd0867 not found: ID does not exist" Jan 31 09:04:52 crc kubenswrapper[4732]: I0131 09:04:52.558191 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" path="/var/lib/kubelet/pods/21393f97-49f1-4f27-a24c-93f88fe6596b/volumes" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.098002 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c8t6l"] Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.445864 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gb54f"] Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.446259 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gb54f" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerName="registry-server" containerID="cri-o://ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a" gracePeriod=30 Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.463278 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-rtg8l"] Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.463534 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rtg8l" podUID="320c2656-6f30-4922-835e-8c27a82800b1" containerName="registry-server" containerID="cri-o://c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d" gracePeriod=30 Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.479757 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ljds4"] Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.480041 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" podUID="f4d0ed50-aa9b-4a62-b340-882ddf73f008" containerName="marketplace-operator" containerID="cri-o://393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183" gracePeriod=30 Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.491081 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7ngt"] Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.491375 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d7ngt" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerName="registry-server" containerID="cri-o://32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc" gracePeriod=30 Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.497863 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vdcdv"] Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.498114 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vdcdv" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" containerName="registry-server" containerID="cri-o://3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6" gracePeriod=30 Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.501988 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8bchs"] Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502275 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerName="extract-utilities" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502297 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerName="extract-utilities" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502311 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerName="extract-utilities" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502320 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerName="extract-utilities" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502331 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerName="extract-content" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502340 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerName="extract-content" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502349 4732 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="317b5076-0f62-45e5-9db0-8d03103c990e" containerName="extract-utilities" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502358 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="317b5076-0f62-45e5-9db0-8d03103c990e" containerName="extract-utilities" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502372 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fab354-a742-4d49-88d9-22843a857ea5" containerName="extract-utilities" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502381 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fab354-a742-4d49-88d9-22843a857ea5" containerName="extract-utilities" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502396 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7497adea-5f95-433f-b644-9aa3eae85937" containerName="pruner" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502404 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7497adea-5f95-433f-b644-9aa3eae85937" containerName="pruner" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502416 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502423 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502434 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerName="extract-content" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502442 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerName="extract-content" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502451 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317b5076-0f62-45e5-9db0-8d03103c990e" containerName="extract-content" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502460 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="317b5076-0f62-45e5-9db0-8d03103c990e" containerName="extract-content" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502477 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317b5076-0f62-45e5-9db0-8d03103c990e" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502484 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="317b5076-0f62-45e5-9db0-8d03103c990e" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502494 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502501 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502514 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fab354-a742-4d49-88d9-22843a857ea5" containerName="extract-content" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502522 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fab354-a742-4d49-88d9-22843a857ea5" containerName="extract-content" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.502531 4732 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fab354-a742-4d49-88d9-22843a857ea5" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502538 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fab354-a742-4d49-88d9-22843a857ea5" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502683 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="7497adea-5f95-433f-b644-9aa3eae85937" containerName="pruner" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502701 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="7006b68f-caf9-44a9-a6df-26e7b594b931" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502714 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fab354-a742-4d49-88d9-22843a857ea5" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502726 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="317b5076-0f62-45e5-9db0-8d03103c990e" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.502739 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="21393f97-49f1-4f27-a24c-93f88fe6596b" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.503306 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.503433 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8bchs"] Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.646398 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhl9h\" (UniqueName: \"kubernetes.io/projected/d0dbfc52-f4e9-462a-a253-2bb950c04e7b-kube-api-access-vhl9h\") pod \"marketplace-operator-79b997595-8bchs\" (UID: \"d0dbfc52-f4e9-462a-a253-2bb950c04e7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.646765 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d0dbfc52-f4e9-462a-a253-2bb950c04e7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8bchs\" (UID: \"d0dbfc52-f4e9-462a-a253-2bb950c04e7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.646806 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0dbfc52-f4e9-462a-a253-2bb950c04e7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8bchs\" (UID: \"d0dbfc52-f4e9-462a-a253-2bb950c04e7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.747573 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhl9h\" (UniqueName: \"kubernetes.io/projected/d0dbfc52-f4e9-462a-a253-2bb950c04e7b-kube-api-access-vhl9h\") pod \"marketplace-operator-79b997595-8bchs\" (UID: \"d0dbfc52-f4e9-462a-a253-2bb950c04e7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc 
kubenswrapper[4732]: I0131 09:04:55.747623 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d0dbfc52-f4e9-462a-a253-2bb950c04e7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8bchs\" (UID: \"d0dbfc52-f4e9-462a-a253-2bb950c04e7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.747654 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0dbfc52-f4e9-462a-a253-2bb950c04e7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8bchs\" (UID: \"d0dbfc52-f4e9-462a-a253-2bb950c04e7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.749364 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0dbfc52-f4e9-462a-a253-2bb950c04e7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8bchs\" (UID: \"d0dbfc52-f4e9-462a-a253-2bb950c04e7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.756455 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d0dbfc52-f4e9-462a-a253-2bb950c04e7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8bchs\" (UID: \"d0dbfc52-f4e9-462a-a253-2bb950c04e7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.766778 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhl9h\" (UniqueName: \"kubernetes.io/projected/d0dbfc52-f4e9-462a-a253-2bb950c04e7b-kube-api-access-vhl9h\") pod \"marketplace-operator-79b997595-8bchs\" (UID: \"d0dbfc52-f4e9-462a-a253-2bb950c04e7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.824789 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc is running failed: container process not found" containerID="32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.826125 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc is running failed: container process not found" containerID="32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.826872 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc is running failed: container process not found" containerID="32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 09:04:55 crc kubenswrapper[4732]: E0131 09:04:55.827071 4732 
prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-d7ngt" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerName="registry-server" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.894441 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.922042 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gb54f" Jan 31 09:04:55 crc kubenswrapper[4732]: I0131 09:04:55.936783 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtg8l" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.027022 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.031582 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7ngt" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.050478 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-654wm\" (UniqueName: \"kubernetes.io/projected/320c2656-6f30-4922-835e-8c27a82800b1-kube-api-access-654wm\") pod \"320c2656-6f30-4922-835e-8c27a82800b1\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.050974 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-utilities\") pod \"320c2656-6f30-4922-835e-8c27a82800b1\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.051020 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27rqc\" (UniqueName: \"kubernetes.io/projected/111ca852-fddd-4fb1-8d5d-331fd5921a71-kube-api-access-27rqc\") pod \"111ca852-fddd-4fb1-8d5d-331fd5921a71\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.051136 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-utilities\") pod \"111ca852-fddd-4fb1-8d5d-331fd5921a71\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.051299 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-catalog-content\") pod \"111ca852-fddd-4fb1-8d5d-331fd5921a71\" (UID: \"111ca852-fddd-4fb1-8d5d-331fd5921a71\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.051339 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-catalog-content\") pod \"320c2656-6f30-4922-835e-8c27a82800b1\" (UID: \"320c2656-6f30-4922-835e-8c27a82800b1\") " Jan 31 09:04:56 crc 
kubenswrapper[4732]: I0131 09:04:56.058142 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-utilities" (OuterVolumeSpecName: "utilities") pod "111ca852-fddd-4fb1-8d5d-331fd5921a71" (UID: "111ca852-fddd-4fb1-8d5d-331fd5921a71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.063427 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-utilities" (OuterVolumeSpecName: "utilities") pod "320c2656-6f30-4922-835e-8c27a82800b1" (UID: "320c2656-6f30-4922-835e-8c27a82800b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.068049 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.071324 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320c2656-6f30-4922-835e-8c27a82800b1-kube-api-access-654wm" (OuterVolumeSpecName: "kube-api-access-654wm") pod "320c2656-6f30-4922-835e-8c27a82800b1" (UID: "320c2656-6f30-4922-835e-8c27a82800b1"). InnerVolumeSpecName "kube-api-access-654wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.081856 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111ca852-fddd-4fb1-8d5d-331fd5921a71-kube-api-access-27rqc" (OuterVolumeSpecName: "kube-api-access-27rqc") pod "111ca852-fddd-4fb1-8d5d-331fd5921a71" (UID: "111ca852-fddd-4fb1-8d5d-331fd5921a71"). InnerVolumeSpecName "kube-api-access-27rqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.138902 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "111ca852-fddd-4fb1-8d5d-331fd5921a71" (UID: "111ca852-fddd-4fb1-8d5d-331fd5921a71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155280 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-utilities\") pod \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155425 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-trusted-ca\") pod \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155450 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67nvg\" (UniqueName: \"kubernetes.io/projected/f4d0ed50-aa9b-4a62-b340-882ddf73f008-kube-api-access-67nvg\") pod \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155467 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dzc9\" (UniqueName: \"kubernetes.io/projected/b03cae03-72c1-4b13-8031-33381e6df48a-kube-api-access-2dzc9\") pod \"b03cae03-72c1-4b13-8031-33381e6df48a\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155559 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-operator-metrics\") pod \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\" (UID: \"f4d0ed50-aa9b-4a62-b340-882ddf73f008\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155585 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-utilities\") pod \"b03cae03-72c1-4b13-8031-33381e6df48a\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155602 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-catalog-content\") pod \"b03cae03-72c1-4b13-8031-33381e6df48a\" (UID: \"b03cae03-72c1-4b13-8031-33381e6df48a\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155651 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rk8\" (UniqueName: \"kubernetes.io/projected/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-kube-api-access-88rk8\") pod \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155684 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-catalog-content\") pod \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\" (UID: \"3d6ffd83-fb99-48e0-a34a-fd365f971ef1\") " Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155885 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-utilities\") on node \"crc\" 
DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155898 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/111ca852-fddd-4fb1-8d5d-331fd5921a71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155909 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-654wm\" (UniqueName: \"kubernetes.io/projected/320c2656-6f30-4922-835e-8c27a82800b1-kube-api-access-654wm\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155918 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.155926 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27rqc\" (UniqueName: \"kubernetes.io/projected/111ca852-fddd-4fb1-8d5d-331fd5921a71-kube-api-access-27rqc\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.156226 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-utilities" (OuterVolumeSpecName: "utilities") pod "3d6ffd83-fb99-48e0-a34a-fd365f971ef1" (UID: "3d6ffd83-fb99-48e0-a34a-fd365f971ef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.156315 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f4d0ed50-aa9b-4a62-b340-882ddf73f008" (UID: "f4d0ed50-aa9b-4a62-b340-882ddf73f008"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.156924 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-utilities" (OuterVolumeSpecName: "utilities") pod "b03cae03-72c1-4b13-8031-33381e6df48a" (UID: "b03cae03-72c1-4b13-8031-33381e6df48a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.159765 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03cae03-72c1-4b13-8031-33381e6df48a-kube-api-access-2dzc9" (OuterVolumeSpecName: "kube-api-access-2dzc9") pod "b03cae03-72c1-4b13-8031-33381e6df48a" (UID: "b03cae03-72c1-4b13-8031-33381e6df48a"). InnerVolumeSpecName "kube-api-access-2dzc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.161544 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-kube-api-access-88rk8" (OuterVolumeSpecName: "kube-api-access-88rk8") pod "3d6ffd83-fb99-48e0-a34a-fd365f971ef1" (UID: "3d6ffd83-fb99-48e0-a34a-fd365f971ef1"). InnerVolumeSpecName "kube-api-access-88rk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.161968 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f4d0ed50-aa9b-4a62-b340-882ddf73f008" (UID: "f4d0ed50-aa9b-4a62-b340-882ddf73f008"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.165415 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d0ed50-aa9b-4a62-b340-882ddf73f008-kube-api-access-67nvg" (OuterVolumeSpecName: "kube-api-access-67nvg") pod "f4d0ed50-aa9b-4a62-b340-882ddf73f008" (UID: "f4d0ed50-aa9b-4a62-b340-882ddf73f008"). InnerVolumeSpecName "kube-api-access-67nvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.186089 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d6ffd83-fb99-48e0-a34a-fd365f971ef1" (UID: "3d6ffd83-fb99-48e0-a34a-fd365f971ef1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.258051 4732 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.258103 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.258116 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.258127 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rk8\" (UniqueName: \"kubernetes.io/projected/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-kube-api-access-88rk8\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.258136 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6ffd83-fb99-48e0-a34a-fd365f971ef1-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.258150 4732 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4d0ed50-aa9b-4a62-b340-882ddf73f008-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.258164 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67nvg\" (UniqueName: \"kubernetes.io/projected/f4d0ed50-aa9b-4a62-b340-882ddf73f008-kube-api-access-67nvg\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.258176 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dzc9\" (UniqueName: 
\"kubernetes.io/projected/b03cae03-72c1-4b13-8031-33381e6df48a-kube-api-access-2dzc9\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.307206 4732 generic.go:334] "Generic (PLEG): container finished" podID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerID="ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a" exitCode=0 Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.307299 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gb54f" event={"ID":"111ca852-fddd-4fb1-8d5d-331fd5921a71","Type":"ContainerDied","Data":"ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.307332 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gb54f" event={"ID":"111ca852-fddd-4fb1-8d5d-331fd5921a71","Type":"ContainerDied","Data":"2852e87c3d90e55a145bff8290e54ac7389fb8f75ebcdc35c688bd52463d5985"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.307358 4732 scope.go:117] "RemoveContainer" containerID="ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.307503 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gb54f" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.310630 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "320c2656-6f30-4922-835e-8c27a82800b1" (UID: "320c2656-6f30-4922-835e-8c27a82800b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.315069 4732 generic.go:334] "Generic (PLEG): container finished" podID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerID="32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc" exitCode=0 Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.315168 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7ngt" event={"ID":"3d6ffd83-fb99-48e0-a34a-fd365f971ef1","Type":"ContainerDied","Data":"32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.315117 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7ngt" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.315207 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7ngt" event={"ID":"3d6ffd83-fb99-48e0-a34a-fd365f971ef1","Type":"ContainerDied","Data":"628f0999201f984caa436318589dd8628cafb6a41db0673c62163c5cc780a5ff"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.319591 4732 generic.go:334] "Generic (PLEG): container finished" podID="b03cae03-72c1-4b13-8031-33381e6df48a" containerID="3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6" exitCode=0 Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.319679 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcdv" event={"ID":"b03cae03-72c1-4b13-8031-33381e6df48a","Type":"ContainerDied","Data":"3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.319701 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdcdv" event={"ID":"b03cae03-72c1-4b13-8031-33381e6df48a","Type":"ContainerDied","Data":"b2573936f6c20f59b9e6d6e1b605a68abfd082c81e607c5222d95cd6ee9a3427"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.319703 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vdcdv" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.322137 4732 generic.go:334] "Generic (PLEG): container finished" podID="320c2656-6f30-4922-835e-8c27a82800b1" containerID="c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d" exitCode=0 Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.322207 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtg8l" event={"ID":"320c2656-6f30-4922-835e-8c27a82800b1","Type":"ContainerDied","Data":"c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.322237 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtg8l" event={"ID":"320c2656-6f30-4922-835e-8c27a82800b1","Type":"ContainerDied","Data":"7ba9bda66cde21334a1fb904223442bb2f107c035dd197b8f6160f7ac322e79d"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.322299 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtg8l" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.327451 4732 generic.go:334] "Generic (PLEG): container finished" podID="f4d0ed50-aa9b-4a62-b340-882ddf73f008" containerID="393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183" exitCode=0 Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.327483 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" event={"ID":"f4d0ed50-aa9b-4a62-b340-882ddf73f008","Type":"ContainerDied","Data":"393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.327502 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" event={"ID":"f4d0ed50-aa9b-4a62-b340-882ddf73f008","Type":"ContainerDied","Data":"94900506e78a6c467e226df2c51bd0deeed7b70de0ce789b914b8672f3e90a32"} Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.327592 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ljds4" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.328209 4732 scope.go:117] "RemoveContainer" containerID="3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.359089 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320c2656-6f30-4922-835e-8c27a82800b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.359436 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7ngt"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.361846 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7ngt"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.373882 4732 scope.go:117] "RemoveContainer" containerID="b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.374807 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rtg8l"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.378328 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rtg8l"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.392655 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b03cae03-72c1-4b13-8031-33381e6df48a" (UID: "b03cae03-72c1-4b13-8031-33381e6df48a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.394029 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gb54f"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.397538 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gb54f"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.399980 4732 scope.go:117] "RemoveContainer" containerID="ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.400436 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a\": container with ID starting with ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a not found: ID does not exist" containerID="ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.400476 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a"} err="failed to get container status \"ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a\": rpc error: code = NotFound desc = could not find container \"ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a\": container with ID starting with ea8e81d27f0fb9fac5964e929f2d106d5646b1bf0175c1c60abc1799d1ad0f4a not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.400529 4732 scope.go:117] "RemoveContainer" containerID="3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.402242 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd\": container with ID starting with 3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd not found: ID does not exist" containerID="3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.402308 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd"} err="failed to get container status \"3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd\": rpc error: code = NotFound desc = could not find container \"3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd\": container with ID starting with 3a1d4a96a5189510003839cb3107e60917aaf8cd548467e8cbeac4847c512fbd not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.402346 4732 scope.go:117] "RemoveContainer" containerID="b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.404145 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb\": container with ID starting with b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb not found: ID does not exist" containerID="b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb" Jan 31 09:04:56 crc 
kubenswrapper[4732]: I0131 09:04:56.404181 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb"} err="failed to get container status \"b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb\": rpc error: code = NotFound desc = could not find container \"b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb\": container with ID starting with b12c70aaf2deee35af9c59e7cdfeb63c5438b4e4668e33b49b029529e03925eb not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.404211 4732 scope.go:117] "RemoveContainer" containerID="32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.408746 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ljds4"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.411275 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ljds4"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.419557 4732 scope.go:117] "RemoveContainer" containerID="ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.432498 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8bchs"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.438680 4732 scope.go:117] "RemoveContainer" containerID="c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1" Jan 31 09:04:56 crc kubenswrapper[4732]: W0131 09:04:56.445321 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0dbfc52_f4e9_462a_a253_2bb950c04e7b.slice/crio-a38616795b18c0402ddac1b409f6ae8b51cd664284f98e405e9a18d7c40ad6fa WatchSource:0}: Error finding container a38616795b18c0402ddac1b409f6ae8b51cd664284f98e405e9a18d7c40ad6fa: Status 404 returned error can't find the container with id a38616795b18c0402ddac1b409f6ae8b51cd664284f98e405e9a18d7c40ad6fa Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.459895 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03cae03-72c1-4b13-8031-33381e6df48a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.462053 4732 scope.go:117] "RemoveContainer" containerID="32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.462401 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc\": container with ID starting with 32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc not found: ID does not exist" containerID="32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.462430 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc"} err="failed to get container status \"32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc\": rpc error: code = NotFound desc = could not find container 
\"32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc\": container with ID starting with 32ebfc35a37c1e9b6779e2a2b5a09f537a17be1ab00055bd8fd9c0a3bc8002bc not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.462450 4732 scope.go:117] "RemoveContainer" containerID="ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.463028 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b\": container with ID starting with ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b not found: ID does not exist" containerID="ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.463089 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b"} err="failed to get container status \"ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b\": rpc error: code = NotFound desc = could not find container \"ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b\": container with ID starting with ca7d5f833a7fd7f21418e6fb1ccdd79977cfae417daaca1c00afcd2310c6650b not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.463119 4732 scope.go:117] "RemoveContainer" containerID="c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.463384 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1\": container with ID starting with c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1 not found: ID does not exist" containerID="c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.463413 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1"} err="failed to get container status \"c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1\": rpc error: code = NotFound desc = could not find container \"c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1\": container with ID starting with c749f49cfbc861f45b12320940264abaaf152474c82e669652462baab6c5bea1 not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.463427 4732 scope.go:117] "RemoveContainer" containerID="3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.478843 4732 scope.go:117] "RemoveContainer" containerID="cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.494928 4732 scope.go:117] "RemoveContainer" containerID="4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.529313 4732 scope.go:117] "RemoveContainer" containerID="3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.529793 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6\": container with ID starting with 3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6 not found: ID does not exist" containerID="3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.529833 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6"} err="failed to get container status \"3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6\": rpc error: code = NotFound desc = could not find container \"3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6\": container with ID starting with 3f07a87f998adaa1b0b96893986a963301b7919b28a28c13a83cb15126a9cbc6 not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.529867 4732 scope.go:117] "RemoveContainer" containerID="cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.530133 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123\": container with ID starting with cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123 not found: ID does not exist" containerID="cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.530175 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123"} err="failed to get container status \"cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123\": rpc error: code = NotFound desc = could not find container \"cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123\": container with ID starting with cb6fa0e2f74568ca7be1a8e4ce25991aeb89ab0933a72f514639c1cb8e80b123 not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.530198 4732 scope.go:117] "RemoveContainer" containerID="4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.530629 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1\": container with ID starting with 4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1 not found: ID does not exist" containerID="4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.530654 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1"} err="failed to get container status \"4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1\": rpc error: code = NotFound desc = could not find container \"4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1\": container with ID starting with 4468cd510de754308c18a9a7af365ee982a0fdce8768e86c8b2291d7c729cfc1 not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.530707 4732 scope.go:117] "RemoveContainer" containerID="c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d" Jan 31 
09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.549965 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" path="/var/lib/kubelet/pods/111ca852-fddd-4fb1-8d5d-331fd5921a71/volumes" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.551171 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320c2656-6f30-4922-835e-8c27a82800b1" path="/var/lib/kubelet/pods/320c2656-6f30-4922-835e-8c27a82800b1/volumes" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.552180 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" path="/var/lib/kubelet/pods/3d6ffd83-fb99-48e0-a34a-fd365f971ef1/volumes" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.554061 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d0ed50-aa9b-4a62-b340-882ddf73f008" path="/var/lib/kubelet/pods/f4d0ed50-aa9b-4a62-b340-882ddf73f008/volumes" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.556918 4732 scope.go:117] "RemoveContainer" containerID="7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.583034 4732 scope.go:117] "RemoveContainer" containerID="9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.659970 4732 scope.go:117] "RemoveContainer" containerID="c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.662604 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d\": container with ID starting with c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d not found: ID does not exist" containerID="c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.662648 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d"} err="failed to get container status \"c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d\": rpc error: code = NotFound desc = could not find container \"c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d\": container with ID starting with c9d4c116354d5c6376aaf4e43c684f7a010f6b0330ee1591e2d15233005a8f2d not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.662694 4732 scope.go:117] "RemoveContainer" containerID="7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.666100 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86\": container with ID starting with 7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86 not found: ID does not exist" containerID="7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.666147 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86"} err="failed to get container status 
\"7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86\": rpc error: code = NotFound desc = could not find container \"7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86\": container with ID starting with 7300efb33be6a4fcedc56ac9376d4a558057b335ad7ac3766bbdc2dea4d36c86 not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.666180 4732 scope.go:117] "RemoveContainer" containerID="9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.666456 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670\": container with ID starting with 9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670 not found: ID does not exist" containerID="9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.666483 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670"} err="failed to get container status \"9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670\": rpc error: code = NotFound desc = could not find container \"9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670\": container with ID starting with 9a69e31450d3f2f57f031d90e49b124d53417614f5829c208f10b5b95ffda670 not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.666499 4732 scope.go:117] "RemoveContainer" containerID="393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.681643 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vdcdv"] Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.683596 4732 scope.go:117] "RemoveContainer" containerID="393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183" Jan 31 09:04:56 crc kubenswrapper[4732]: E0131 09:04:56.686865 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183\": container with ID starting with 393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183 not found: ID does not exist" containerID="393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.686906 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183"} err="failed to get container status \"393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183\": rpc error: code = NotFound desc = could not find container \"393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183\": container with ID starting with 393baf9619f9cf20095d6f558e1acd40e97479a63b2c71e28acd340a9b8c0183 not found: ID does not exist" Jan 31 09:04:56 crc kubenswrapper[4732]: I0131 09:04:56.689485 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vdcdv"] Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.334727 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" 
event={"ID":"d0dbfc52-f4e9-462a-a253-2bb950c04e7b","Type":"ContainerStarted","Data":"d4b70b882cd4306a7b243faa137df216be0949001c889fd2898ec0ccea609c0a"} Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.335983 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.336095 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" event={"ID":"d0dbfc52-f4e9-462a-a253-2bb950c04e7b","Type":"ContainerStarted","Data":"a38616795b18c0402ddac1b409f6ae8b51cd664284f98e405e9a18d7c40ad6fa"} Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.341767 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.355171 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8bchs" podStartSLOduration=2.355145179 podStartE2EDuration="2.355145179s" podCreationTimestamp="2026-01-31 09:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:04:57.350298365 +0000 UTC m=+235.656174569" watchObservedRunningTime="2026-01-31 09:04:57.355145179 +0000 UTC m=+235.661021383" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593238 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-krjtb"] Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593496 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" containerName="extract-content" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593510 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" containerName="extract-content" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593521 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593527 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593535 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320c2656-6f30-4922-835e-8c27a82800b1" containerName="extract-utilities" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593541 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="320c2656-6f30-4922-835e-8c27a82800b1" containerName="extract-utilities" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593551 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerName="extract-utilities" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593556 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerName="extract-utilities" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593565 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320c2656-6f30-4922-835e-8c27a82800b1" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593571 4732 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="320c2656-6f30-4922-835e-8c27a82800b1" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593583 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320c2656-6f30-4922-835e-8c27a82800b1" containerName="extract-content" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593591 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="320c2656-6f30-4922-835e-8c27a82800b1" containerName="extract-content" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593601 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerName="extract-utilities" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593608 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerName="extract-utilities" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593620 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerName="extract-content" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593627 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerName="extract-content" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593639 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d0ed50-aa9b-4a62-b340-882ddf73f008" containerName="marketplace-operator" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593647 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d0ed50-aa9b-4a62-b340-882ddf73f008" containerName="marketplace-operator" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593655 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593679 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593686 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" containerName="extract-utilities" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593692 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" containerName="extract-utilities" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593700 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593706 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: E0131 09:04:57.593714 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerName="extract-content" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593720 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerName="extract-content" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593817 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="111ca852-fddd-4fb1-8d5d-331fd5921a71" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593830 4732 
memory_manager.go:354] "RemoveStaleState removing state" podUID="320c2656-6f30-4922-835e-8c27a82800b1" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593850 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6ffd83-fb99-48e0-a34a-fd365f971ef1" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593860 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" containerName="registry-server" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.593870 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d0ed50-aa9b-4a62-b340-882ddf73f008" containerName="marketplace-operator" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.594750 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.598131 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.607128 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krjtb"] Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.674602 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e07aee-c4b1-4011-8442-c6dcfc4f415c-utilities\") pod \"redhat-marketplace-krjtb\" (UID: \"17e07aee-c4b1-4011-8442-c6dcfc4f415c\") " pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.674674 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e07aee-c4b1-4011-8442-c6dcfc4f415c-catalog-content\") pod \"redhat-marketplace-krjtb\" (UID: \"17e07aee-c4b1-4011-8442-c6dcfc4f415c\") " pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.674704 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9m8t\" (UniqueName: \"kubernetes.io/projected/17e07aee-c4b1-4011-8442-c6dcfc4f415c-kube-api-access-l9m8t\") pod \"redhat-marketplace-krjtb\" (UID: \"17e07aee-c4b1-4011-8442-c6dcfc4f415c\") " pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.776367 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e07aee-c4b1-4011-8442-c6dcfc4f415c-utilities\") pod \"redhat-marketplace-krjtb\" (UID: \"17e07aee-c4b1-4011-8442-c6dcfc4f415c\") " pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.776415 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e07aee-c4b1-4011-8442-c6dcfc4f415c-catalog-content\") pod \"redhat-marketplace-krjtb\" (UID: \"17e07aee-c4b1-4011-8442-c6dcfc4f415c\") " pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.776442 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9m8t\" (UniqueName: 
\"kubernetes.io/projected/17e07aee-c4b1-4011-8442-c6dcfc4f415c-kube-api-access-l9m8t\") pod \"redhat-marketplace-krjtb\" (UID: \"17e07aee-c4b1-4011-8442-c6dcfc4f415c\") " pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.776943 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e07aee-c4b1-4011-8442-c6dcfc4f415c-utilities\") pod \"redhat-marketplace-krjtb\" (UID: \"17e07aee-c4b1-4011-8442-c6dcfc4f415c\") " pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.777179 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e07aee-c4b1-4011-8442-c6dcfc4f415c-catalog-content\") pod \"redhat-marketplace-krjtb\" (UID: \"17e07aee-c4b1-4011-8442-c6dcfc4f415c\") " pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.796557 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9m8t\" (UniqueName: \"kubernetes.io/projected/17e07aee-c4b1-4011-8442-c6dcfc4f415c-kube-api-access-l9m8t\") pod \"redhat-marketplace-krjtb\" (UID: \"17e07aee-c4b1-4011-8442-c6dcfc4f415c\") " pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:57 crc kubenswrapper[4732]: I0131 09:04:57.907851 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:04:58 crc kubenswrapper[4732]: I0131 09:04:58.191490 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2h57x"] Jan 31 09:04:58 crc kubenswrapper[4732]: I0131 09:04:58.192598 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:58 crc kubenswrapper[4732]: I0131 09:04:58.194337 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 09:04:58 crc kubenswrapper[4732]: I0131 09:04:58.204835 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2h57x"] Jan 31 09:04:58 crc kubenswrapper[4732]: I0131 09:04:58.283426 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9039963e-96e4-4b4d-abdd-79f0429da944-catalog-content\") pod \"certified-operators-2h57x\" (UID: \"9039963e-96e4-4b4d-abdd-79f0429da944\") " pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:58 crc kubenswrapper[4732]: I0131 09:04:58.283673 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9039963e-96e4-4b4d-abdd-79f0429da944-utilities\") pod \"certified-operators-2h57x\" (UID: \"9039963e-96e4-4b4d-abdd-79f0429da944\") " pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:58 crc kubenswrapper[4732]: I0131 09:04:58.283757 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4k9\" (UniqueName: \"kubernetes.io/projected/9039963e-96e4-4b4d-abdd-79f0429da944-kube-api-access-9v4k9\") pod \"certified-operators-2h57x\" (UID: \"9039963e-96e4-4b4d-abdd-79f0429da944\") " pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:58 crc kubenswrapper[4732]: I0131 09:04:58.317579 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krjtb"] Jan 31 09:04:58 crc kubenswrapper[4732]: W0131 09:04:58.322856 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17e07aee_c4b1_4011_8442_c6dcfc4f415c.slice/crio-43b9d1425fde7694bc4eec8c3cd8bed52a252a7fe324c00d8813428ba196e5ec WatchSource:0}: Error finding container 43b9d1425fde7694bc4eec8c3cd8bed52a252a7fe324c00d8813428ba196e5ec: Status 404 returned error can't find the container with id 43b9d1425fde7694bc4eec8c3cd8bed52a252a7fe324c00d8813428ba196e5ec Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.007448 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9039963e-96e4-4b4d-abdd-79f0429da944-catalog-content\") pod \"certified-operators-2h57x\" (UID: \"9039963e-96e4-4b4d-abdd-79f0429da944\") " pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.007544 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9039963e-96e4-4b4d-abdd-79f0429da944-utilities\") pod \"certified-operators-2h57x\" (UID: \"9039963e-96e4-4b4d-abdd-79f0429da944\") " pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.007597 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4k9\" (UniqueName: \"kubernetes.io/projected/9039963e-96e4-4b4d-abdd-79f0429da944-kube-api-access-9v4k9\") pod \"certified-operators-2h57x\" (UID: \"9039963e-96e4-4b4d-abdd-79f0429da944\") " 
pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.008628 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9039963e-96e4-4b4d-abdd-79f0429da944-catalog-content\") pod \"certified-operators-2h57x\" (UID: \"9039963e-96e4-4b4d-abdd-79f0429da944\") " pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.008874 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9039963e-96e4-4b4d-abdd-79f0429da944-utilities\") pod \"certified-operators-2h57x\" (UID: \"9039963e-96e4-4b4d-abdd-79f0429da944\") " pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.017592 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03cae03-72c1-4b13-8031-33381e6df48a" path="/var/lib/kubelet/pods/b03cae03-72c1-4b13-8031-33381e6df48a/volumes" Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.018372 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krjtb" event={"ID":"17e07aee-c4b1-4011-8442-c6dcfc4f415c","Type":"ContainerStarted","Data":"43b9d1425fde7694bc4eec8c3cd8bed52a252a7fe324c00d8813428ba196e5ec"} Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.035515 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v4k9\" (UniqueName: \"kubernetes.io/projected/9039963e-96e4-4b4d-abdd-79f0429da944-kube-api-access-9v4k9\") pod \"certified-operators-2h57x\" (UID: \"9039963e-96e4-4b4d-abdd-79f0429da944\") " pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.111010 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.512256 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2h57x"] Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.990342 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4pkzq"] Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.991438 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:04:59 crc kubenswrapper[4732]: I0131 09:04:59.994003 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.001777 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4pkzq"] Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.044773 4732 generic.go:334] "Generic (PLEG): container finished" podID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" containerID="df94e93d1915945c57db95a4648474124da0667129113eae9e51acdd65857bc4" exitCode=0 Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.044873 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krjtb" event={"ID":"17e07aee-c4b1-4011-8442-c6dcfc4f415c","Type":"ContainerDied","Data":"df94e93d1915945c57db95a4648474124da0667129113eae9e51acdd65857bc4"} Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.049418 4732 generic.go:334] "Generic (PLEG): container finished" podID="9039963e-96e4-4b4d-abdd-79f0429da944" containerID="f4798e8742e6ebd4b4d72427d15eb2585f404bd738e00648ac21d6ea1c06fa8c" exitCode=0 Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.049565 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h57x" event={"ID":"9039963e-96e4-4b4d-abdd-79f0429da944","Type":"ContainerDied","Data":"f4798e8742e6ebd4b4d72427d15eb2585f404bd738e00648ac21d6ea1c06fa8c"} Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.049846 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h57x" event={"ID":"9039963e-96e4-4b4d-abdd-79f0429da944","Type":"ContainerStarted","Data":"f6925f718ca946b7446cf11c11588d4363a885a606a6a0c54f739e78130ded85"} Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.131927 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgb5\" (UniqueName: \"kubernetes.io/projected/a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45-kube-api-access-2sgb5\") pod \"redhat-operators-4pkzq\" (UID: \"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45\") " pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.132046 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45-catalog-content\") pod \"redhat-operators-4pkzq\" (UID: \"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45\") " pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.132178 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45-utilities\") pod \"redhat-operators-4pkzq\" (UID: \"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45\") " pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.233885 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45-catalog-content\") pod \"redhat-operators-4pkzq\" (UID: \"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45\") " pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc 
kubenswrapper[4732]: I0131 09:05:00.234313 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45-utilities\") pod \"redhat-operators-4pkzq\" (UID: \"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45\") " pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.234501 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sgb5\" (UniqueName: \"kubernetes.io/projected/a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45-kube-api-access-2sgb5\") pod \"redhat-operators-4pkzq\" (UID: \"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45\") " pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.234763 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45-utilities\") pod \"redhat-operators-4pkzq\" (UID: \"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45\") " pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.234434 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45-catalog-content\") pod \"redhat-operators-4pkzq\" (UID: \"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45\") " pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.272847 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sgb5\" (UniqueName: \"kubernetes.io/projected/a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45-kube-api-access-2sgb5\") pod \"redhat-operators-4pkzq\" (UID: \"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45\") " pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.310274 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pkzq" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.597808 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d7jxg"] Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.599749 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.600325 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d7jxg"] Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.603840 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.710230 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4pkzq"] Jan 31 09:05:00 crc kubenswrapper[4732]: W0131 09:05:00.724604 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda39f958d_6d9b_4a4a_9ec9_cfb1f96b6f45.slice/crio-18f8e635fe93be4696f889e2fc45f7c0593a284c1d2aa8aad38fa3d009850a26 WatchSource:0}: Error finding container 18f8e635fe93be4696f889e2fc45f7c0593a284c1d2aa8aad38fa3d009850a26: Status 404 returned error can't find the container with id 18f8e635fe93be4696f889e2fc45f7c0593a284c1d2aa8aad38fa3d009850a26 Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.741920 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pzr\" (UniqueName: \"kubernetes.io/projected/a7533049-a0d8-4488-bed6-2a9b28212061-kube-api-access-h6pzr\") pod \"community-operators-d7jxg\" (UID: \"a7533049-a0d8-4488-bed6-2a9b28212061\") " pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.742013 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7533049-a0d8-4488-bed6-2a9b28212061-catalog-content\") pod \"community-operators-d7jxg\" (UID: \"a7533049-a0d8-4488-bed6-2a9b28212061\") " pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.742078 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7533049-a0d8-4488-bed6-2a9b28212061-utilities\") pod \"community-operators-d7jxg\" (UID: \"a7533049-a0d8-4488-bed6-2a9b28212061\") " pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.843921 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7533049-a0d8-4488-bed6-2a9b28212061-catalog-content\") pod \"community-operators-d7jxg\" (UID: \"a7533049-a0d8-4488-bed6-2a9b28212061\") " pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.844019 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7533049-a0d8-4488-bed6-2a9b28212061-utilities\") pod \"community-operators-d7jxg\" (UID: \"a7533049-a0d8-4488-bed6-2a9b28212061\") " pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.844071 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6pzr\" (UniqueName: \"kubernetes.io/projected/a7533049-a0d8-4488-bed6-2a9b28212061-kube-api-access-h6pzr\") pod \"community-operators-d7jxg\" (UID: \"a7533049-a0d8-4488-bed6-2a9b28212061\") " 
pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.844429 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7533049-a0d8-4488-bed6-2a9b28212061-catalog-content\") pod \"community-operators-d7jxg\" (UID: \"a7533049-a0d8-4488-bed6-2a9b28212061\") " pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.844525 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7533049-a0d8-4488-bed6-2a9b28212061-utilities\") pod \"community-operators-d7jxg\" (UID: \"a7533049-a0d8-4488-bed6-2a9b28212061\") " pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.865988 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6pzr\" (UniqueName: \"kubernetes.io/projected/a7533049-a0d8-4488-bed6-2a9b28212061-kube-api-access-h6pzr\") pod \"community-operators-d7jxg\" (UID: \"a7533049-a0d8-4488-bed6-2a9b28212061\") " pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:00 crc kubenswrapper[4732]: I0131 09:05:00.921633 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7jxg" Jan 31 09:05:01 crc kubenswrapper[4732]: I0131 09:05:01.061297 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h57x" event={"ID":"9039963e-96e4-4b4d-abdd-79f0429da944","Type":"ContainerStarted","Data":"0f4992419822b81cadf13d0c8b0224684037767274dfe5b0249022e62b9c4ebd"} Jan 31 09:05:01 crc kubenswrapper[4732]: I0131 09:05:01.068284 4732 generic.go:334] "Generic (PLEG): container finished" podID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" containerID="b5b10fba37c644587760b47c06070201a3597c2ce593265462cc4c0d593c0ca0" exitCode=0 Jan 31 09:05:01 crc kubenswrapper[4732]: I0131 09:05:01.068378 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pkzq" event={"ID":"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45","Type":"ContainerDied","Data":"b5b10fba37c644587760b47c06070201a3597c2ce593265462cc4c0d593c0ca0"} Jan 31 09:05:01 crc kubenswrapper[4732]: I0131 09:05:01.068415 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pkzq" event={"ID":"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45","Type":"ContainerStarted","Data":"18f8e635fe93be4696f889e2fc45f7c0593a284c1d2aa8aad38fa3d009850a26"} Jan 31 09:05:01 crc kubenswrapper[4732]: I0131 09:05:01.076995 4732 generic.go:334] "Generic (PLEG): container finished" podID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" containerID="3628d84d76141cff468153e0bfd020a893eb5565565cd42e084eaf8dd83b3ea1" exitCode=0 Jan 31 09:05:01 crc kubenswrapper[4732]: I0131 09:05:01.077043 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krjtb" event={"ID":"17e07aee-c4b1-4011-8442-c6dcfc4f415c","Type":"ContainerDied","Data":"3628d84d76141cff468153e0bfd020a893eb5565565cd42e084eaf8dd83b3ea1"} Jan 31 09:05:01 crc kubenswrapper[4732]: I0131 09:05:01.336619 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d7jxg"] Jan 31 09:05:01 crc kubenswrapper[4732]: W0131 09:05:01.351502 4732 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7533049_a0d8_4488_bed6_2a9b28212061.slice/crio-4f1de4847787c76739eb84dcdae2234327fee04c61499def057ec8246fbf505c WatchSource:0}: Error finding container 4f1de4847787c76739eb84dcdae2234327fee04c61499def057ec8246fbf505c: Status 404 returned error can't find the container with id 4f1de4847787c76739eb84dcdae2234327fee04c61499def057ec8246fbf505c Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.084630 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pkzq" event={"ID":"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45","Type":"ContainerStarted","Data":"79b1dd46333c264727adfb34d060a4bcca67f80ac41273dff417c69f0824a053"} Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.087010 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krjtb" event={"ID":"17e07aee-c4b1-4011-8442-c6dcfc4f415c","Type":"ContainerStarted","Data":"62dae642b0a846e49e35df8bd8ab6d5ea17e6f6b3c1acf1b5fa3cb86ea5bcd76"} Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.088300 4732 generic.go:334] "Generic (PLEG): container finished" podID="a7533049-a0d8-4488-bed6-2a9b28212061" containerID="caf869cbc215b2afbdd847a43f8f6c677e5acec31907d104d763b015257023a9" exitCode=0 Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.088352 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jxg" event={"ID":"a7533049-a0d8-4488-bed6-2a9b28212061","Type":"ContainerDied","Data":"caf869cbc215b2afbdd847a43f8f6c677e5acec31907d104d763b015257023a9"} Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.088418 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jxg" event={"ID":"a7533049-a0d8-4488-bed6-2a9b28212061","Type":"ContainerStarted","Data":"4f1de4847787c76739eb84dcdae2234327fee04c61499def057ec8246fbf505c"} Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.090918 4732 generic.go:334] "Generic (PLEG): container finished" podID="9039963e-96e4-4b4d-abdd-79f0429da944" containerID="0f4992419822b81cadf13d0c8b0224684037767274dfe5b0249022e62b9c4ebd" exitCode=0 Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.090961 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h57x" event={"ID":"9039963e-96e4-4b4d-abdd-79f0429da944","Type":"ContainerDied","Data":"0f4992419822b81cadf13d0c8b0224684037767274dfe5b0249022e62b9c4ebd"} Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.130733 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-krjtb" podStartSLOduration=3.535717187 podStartE2EDuration="5.13071423s" podCreationTimestamp="2026-01-31 09:04:57 +0000 UTC" firstStartedPulling="2026-01-31 09:05:00.048259304 +0000 UTC m=+238.354135518" lastFinishedPulling="2026-01-31 09:05:01.643256357 +0000 UTC m=+239.949132561" observedRunningTime="2026-01-31 09:05:02.126450826 +0000 UTC m=+240.432327040" watchObservedRunningTime="2026-01-31 09:05:02.13071423 +0000 UTC m=+240.436590434" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.615265 4732 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.616023 4732 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:05:02 crc kubenswrapper[4732]: 
I0131 09:05:02.616070 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.616420 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.616457 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.616490 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.616567 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.616490 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b" gracePeriod=15 Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617227 4732 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.617363 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617380 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.617392 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617399 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.617411 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617417 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.617428 4732 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617436 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.617447 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617454 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.617463 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617469 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.617480 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617487 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617598 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617609 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617617 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617626 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617637 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617648 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.617815 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617825 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.617937 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.719048 4732 kubelet.go:1929] "Failed creating a 
mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.231:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: E0131 09:05:02.747421 4732 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.231:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-2h57x.188fc57928be0c23 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-2h57x,UID:9039963e-96e4-4b4d-abdd-79f0429da944,APIVersion:v1,ResourceVersion:29558,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 09:05:02.745152547 +0000 UTC m=+241.051028751,LastTimestamp:2026-01-31 09:05:02.745152547 +0000 UTC m=+241.051028751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.783504 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.783561 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.783596 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.783626 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.784566 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.784634 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.784701 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.784833 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886447 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886521 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886541 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886561 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886580 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886591 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886650 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886693 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886649 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886724 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886610 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886726 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886754 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886780 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886832 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:02 crc kubenswrapper[4732]: I0131 09:05:02.886862 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.021182 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:03 crc kubenswrapper[4732]: W0131 09:05:03.042190 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-4448d0e6cbfe51dbafe95df2b0b0c2bf5f10a0cced86db8ec754e8e959a9424a WatchSource:0}: Error finding container 4448d0e6cbfe51dbafe95df2b0b0c2bf5f10a0cced86db8ec754e8e959a9424a: Status 404 returned error can't find the container with id 4448d0e6cbfe51dbafe95df2b0b0c2bf5f10a0cced86db8ec754e8e959a9424a Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.100147 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4448d0e6cbfe51dbafe95df2b0b0c2bf5f10a0cced86db8ec754e8e959a9424a"} Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.102691 4732 generic.go:334] "Generic (PLEG): container finished" podID="e90ec082-a189-4726-8049-2151ddf77961" containerID="23b1edb2efd137d80917cfc98d36d9f6d054406b735c02e22b781cbf0e7d7c9e" exitCode=0 Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.102747 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e90ec082-a189-4726-8049-2151ddf77961","Type":"ContainerDied","Data":"23b1edb2efd137d80917cfc98d36d9f6d054406b735c02e22b781cbf0e7d7c9e"} Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.103539 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.103763 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.105456 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h57x" event={"ID":"9039963e-96e4-4b4d-abdd-79f0429da944","Type":"ContainerStarted","Data":"7e18dcd588ce02a5c46a844fa65ca0543996b00459817dc2f4b5a1bc8ce6068b"} Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.106421 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.106955 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.107220 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.108067 4732 generic.go:334] "Generic (PLEG): container finished" podID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" containerID="79b1dd46333c264727adfb34d060a4bcca67f80ac41273dff417c69f0824a053" exitCode=0 Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.108124 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pkzq" event={"ID":"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45","Type":"ContainerDied","Data":"79b1dd46333c264727adfb34d060a4bcca67f80ac41273dff417c69f0824a053"} Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.108963 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.109241 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.109485 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.109732 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.112239 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.114347 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.116048 4732 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b" exitCode=0 Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.116072 4732 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9" exitCode=0 Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.116082 4732 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722" exitCode=0 Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.116091 4732 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7" exitCode=2 Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.116165 4732 scope.go:117] "RemoveContainer" containerID="c05af9bd69451273c25ddddf6b11cdd7b8f9b6e3e16d18f1c4a962321905696e" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.118259 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jxg" event={"ID":"a7533049-a0d8-4488-bed6-2a9b28212061","Type":"ContainerStarted","Data":"711e2d4662cca5ec3b889c4e8addee987bb0a90f328fc5a8da0fd85d02eab978"} Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.118625 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.118990 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.119463 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.119812 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:03 crc kubenswrapper[4732]: I0131 09:05:03.120308 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.015989 4732 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.016560 4732 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.125590 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.127563 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3"} Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.128105 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: E0131 09:05:04.128130 4732 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.231:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.128368 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.128681 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.128951 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.129188 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.129745 4732 generic.go:334] "Generic (PLEG): container finished" podID="a7533049-a0d8-4488-bed6-2a9b28212061" containerID="711e2d4662cca5ec3b889c4e8addee987bb0a90f328fc5a8da0fd85d02eab978" exitCode=0 Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.129823 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-d7jxg" event={"ID":"a7533049-a0d8-4488-bed6-2a9b28212061","Type":"ContainerDied","Data":"711e2d4662cca5ec3b889c4e8addee987bb0a90f328fc5a8da0fd85d02eab978"} Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.130240 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.130481 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.131468 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.131724 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.131987 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.132143 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pkzq" event={"ID":"a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45","Type":"ContainerStarted","Data":"d9a3c754c27f8d01db64275a210cc130ed2e117a9a1d0dca50f48a1ad0c4749e"} Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.132820 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.133092 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.133912 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.134577 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.134855 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.393120 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.394394 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.394903 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.395402 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.395796 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.396054 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.505884 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-var-lock\") pod \"e90ec082-a189-4726-8049-2151ddf77961\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.506029 4732 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-kubelet-dir\") pod \"e90ec082-a189-4726-8049-2151ddf77961\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.506060 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e90ec082-a189-4726-8049-2151ddf77961-kube-api-access\") pod \"e90ec082-a189-4726-8049-2151ddf77961\" (UID: \"e90ec082-a189-4726-8049-2151ddf77961\") " Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.506019 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-var-lock" (OuterVolumeSpecName: "var-lock") pod "e90ec082-a189-4726-8049-2151ddf77961" (UID: "e90ec082-a189-4726-8049-2151ddf77961"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.506083 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e90ec082-a189-4726-8049-2151ddf77961" (UID: "e90ec082-a189-4726-8049-2151ddf77961"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.511710 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90ec082-a189-4726-8049-2151ddf77961-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e90ec082-a189-4726-8049-2151ddf77961" (UID: "e90ec082-a189-4726-8049-2151ddf77961"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.607239 4732 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.607271 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e90ec082-a189-4726-8049-2151ddf77961-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.607284 4732 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e90ec082-a189-4726-8049-2151ddf77961-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:04 crc kubenswrapper[4732]: E0131 09:05:04.625954 4732 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: E0131 09:05:04.626460 4732 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: E0131 09:05:04.626876 4732 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: E0131 09:05:04.627088 4732 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: E0131 09:05:04.627302 4732 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:04 crc kubenswrapper[4732]: I0131 09:05:04.627330 4732 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 09:05:04 crc kubenswrapper[4732]: E0131 09:05:04.627522 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="200ms" Jan 31 09:05:04 crc kubenswrapper[4732]: E0131 09:05:04.828362 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="400ms" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.027349 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.028327 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.028919 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.029435 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.029915 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.030169 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.030448 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.129314 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.129461 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.129473 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.129512 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.129547 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.129639 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.129951 4732 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.129969 4732 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.129980 4732 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.143985 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e90ec082-a189-4726-8049-2151ddf77961","Type":"ContainerDied","Data":"68aff39b648c97e8da52037b14095371b6089843216d06b0916075106acf4b04"} Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.144022 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68aff39b648c97e8da52037b14095371b6089843216d06b0916075106acf4b04" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.144087 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.147653 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.148031 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.148445 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.148691 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.148932 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.149108 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.149961 4732 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd" exitCode=0 Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.150072 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.150865 4732 scope.go:117] "RemoveContainer" containerID="c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.163131 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.163600 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.164057 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.164361 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.164656 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.164851 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jxg" event={"ID":"a7533049-a0d8-4488-bed6-2a9b28212061","Type":"ContainerStarted","Data":"ed84a78e28087f4c77e45e5411b14a6396a5280235d5cb0b510fa269633f7cd6"} Jan 31 09:05:05 crc kubenswrapper[4732]: E0131 09:05:05.165413 4732 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.231:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.166184 4732 scope.go:117] "RemoveContainer" containerID="83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.182948 4732 scope.go:117] "RemoveContainer" containerID="52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.196159 4732 scope.go:117] "RemoveContainer" containerID="89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.208583 4732 scope.go:117] "RemoveContainer" 
containerID="2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.227397 4732 scope.go:117] "RemoveContainer" containerID="31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f" Jan 31 09:05:05 crc kubenswrapper[4732]: E0131 09:05:05.229956 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="800ms" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.244074 4732 scope.go:117] "RemoveContainer" containerID="c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b" Jan 31 09:05:05 crc kubenswrapper[4732]: E0131 09:05:05.244495 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\": container with ID starting with c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b not found: ID does not exist" containerID="c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.244547 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b"} err="failed to get container status \"c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\": rpc error: code = NotFound desc = could not find container \"c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b\": container with ID starting with c276b6fdf2c47bfa38c4cf312bf6b893388c4291d306738fd6c31738cbd7483b not found: ID does not exist" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.244582 4732 scope.go:117] "RemoveContainer" containerID="83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9" Jan 31 09:05:05 crc kubenswrapper[4732]: E0131 09:05:05.244920 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\": container with ID starting with 83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9 not found: ID does not exist" containerID="83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.244966 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9"} err="failed to get container status \"83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\": rpc error: code = NotFound desc = could not find container \"83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9\": container with ID starting with 83c2048db4772c1a5f60b21340f773ab5c651045d4dac9edc15e91a19d15dce9 not found: ID does not exist" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.244993 4732 scope.go:117] "RemoveContainer" containerID="52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722" Jan 31 09:05:05 crc kubenswrapper[4732]: E0131 09:05:05.245240 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\": container with ID starting with 
52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722 not found: ID does not exist" containerID="52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.245268 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722"} err="failed to get container status \"52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\": rpc error: code = NotFound desc = could not find container \"52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722\": container with ID starting with 52b8ba07937855cb1fd0e1db567f90e08848fca583b7174153918e2d6356d722 not found: ID does not exist" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.245288 4732 scope.go:117] "RemoveContainer" containerID="89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7" Jan 31 09:05:05 crc kubenswrapper[4732]: E0131 09:05:05.245471 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\": container with ID starting with 89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7 not found: ID does not exist" containerID="89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.245500 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7"} err="failed to get container status \"89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\": rpc error: code = NotFound desc = could not find container \"89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7\": container with ID starting with 89ce42a35726dd1fc6aea0eb2e5baa490de04566cd6d6298185260e896a8def7 not found: ID does not exist" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.245515 4732 scope.go:117] "RemoveContainer" containerID="2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd" Jan 31 09:05:05 crc kubenswrapper[4732]: E0131 09:05:05.245706 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\": container with ID starting with 2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd not found: ID does not exist" containerID="2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.245726 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd"} err="failed to get container status \"2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\": rpc error: code = NotFound desc = could not find container \"2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd\": container with ID starting with 2b5cbd45e357205f8bfc648396c0f684d6788343f687504feb45618d9082e5cd not found: ID does not exist" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.245741 4732 scope.go:117] "RemoveContainer" containerID="31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f" Jan 31 09:05:05 crc kubenswrapper[4732]: E0131 09:05:05.245939 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\": container with ID starting with 31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f not found: ID does not exist" containerID="31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f" Jan 31 09:05:05 crc kubenswrapper[4732]: I0131 09:05:05.245962 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f"} err="failed to get container status \"31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\": rpc error: code = NotFound desc = could not find container \"31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f\": container with ID starting with 31c705a62170aac7bfc2c3528b20d98f75257896646d26d74f7a6cafb5be7a2f not found: ID does not exist" Jan 31 09:05:06 crc kubenswrapper[4732]: E0131 09:05:06.030622 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="1.6s" Jan 31 09:05:06 crc kubenswrapper[4732]: I0131 09:05:06.169789 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:06 crc kubenswrapper[4732]: I0131 09:05:06.170125 4732 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:06 crc kubenswrapper[4732]: I0131 09:05:06.170475 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:06 crc kubenswrapper[4732]: I0131 09:05:06.170646 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:06 crc kubenswrapper[4732]: I0131 09:05:06.170823 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:06 crc kubenswrapper[4732]: I0131 09:05:06.549045 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 09:05:07 crc kubenswrapper[4732]: E0131 09:05:07.552317 4732 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.231:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-2h57x.188fc57928be0c23 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-2h57x,UID:9039963e-96e4-4b4d-abdd-79f0429da944,APIVersion:v1,ResourceVersion:29558,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 09:05:02.745152547 +0000 UTC m=+241.051028751,LastTimestamp:2026-01-31 09:05:02.745152547 +0000 UTC m=+241.051028751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 09:05:07 crc kubenswrapper[4732]: E0131 09:05:07.632089 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="3.2s" Jan 31 09:05:07 crc kubenswrapper[4732]: I0131 09:05:07.908048 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:05:07 crc kubenswrapper[4732]: I0131 09:05:07.908408 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:05:07 crc kubenswrapper[4732]: I0131 09:05:07.955315 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:05:07 crc kubenswrapper[4732]: I0131 09:05:07.955922 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:07 crc kubenswrapper[4732]: I0131 09:05:07.956109 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:07 crc kubenswrapper[4732]: I0131 09:05:07.956276 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:07 crc kubenswrapper[4732]: I0131 09:05:07.956413 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:07 crc kubenswrapper[4732]: I0131 09:05:07.956543 4732 status_manager.go:851] "Failed to get status for pod" 
podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:08 crc kubenswrapper[4732]: I0131 09:05:08.248841 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-krjtb" Jan 31 09:05:08 crc kubenswrapper[4732]: I0131 09:05:08.249701 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:08 crc kubenswrapper[4732]: I0131 09:05:08.250081 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:08 crc kubenswrapper[4732]: I0131 09:05:08.250383 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:08 crc kubenswrapper[4732]: I0131 09:05:08.250749 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:08 crc kubenswrapper[4732]: I0131 09:05:08.251133 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.111697 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.111755 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.177362 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.177918 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.178197 4732 status_manager.go:851] "Failed to get status for pod" 
podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.178453 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.178710 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.178986 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.225686 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2h57x" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.226242 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.226565 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.226813 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.227006 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" Jan 31 09:05:09 crc kubenswrapper[4732]: I0131 09:05:09.227199 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": 
dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.311244 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4pkzq"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.311590 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4pkzq"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.351286 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4pkzq"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.352107 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.352523 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.352844 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.353176 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.353463 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: E0131 09:05:10.832946 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="6.4s"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.922863 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d7jxg"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.922924 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d7jxg"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.969760 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d7jxg"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.970427 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.971007 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.971473 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.971855 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:10 crc kubenswrapper[4732]: I0131 09:05:10.972127 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.231810 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d7jxg"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.233150 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.233836 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.234321 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.234545 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.234788 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.241135 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4pkzq"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.241736 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.242153 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.242710 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.242936 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:11 crc kubenswrapper[4732]: I0131 09:05:11.243182 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:12 crc kubenswrapper[4732]: I0131 09:05:12.549873 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:12 crc kubenswrapper[4732]: I0131 09:05:12.550662 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:12 crc kubenswrapper[4732]: I0131 09:05:12.550859 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:12 crc kubenswrapper[4732]: I0131 09:05:12.551055 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:12 crc kubenswrapper[4732]: I0131 09:05:12.551200 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:12 crc kubenswrapper[4732]: E0131 09:05:12.602523 4732 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.231:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" volumeName="registry-storage"
Jan 31 09:05:14 crc kubenswrapper[4732]: I0131 09:05:14.541986 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:14 crc kubenswrapper[4732]: I0131 09:05:14.542795 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:14 crc kubenswrapper[4732]: I0131 09:05:14.543356 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:14 crc kubenswrapper[4732]: I0131 09:05:14.543600 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:14 crc kubenswrapper[4732]: I0131 09:05:14.543940 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:14 crc kubenswrapper[4732]: I0131 09:05:14.544282 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:14 crc kubenswrapper[4732]: I0131 09:05:14.554492 4732 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:14 crc kubenswrapper[4732]: I0131 09:05:14.554707 4732 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:14 crc kubenswrapper[4732]: E0131 09:05:14.555119 4732 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:14 crc kubenswrapper[4732]: I0131 09:05:14.555547 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:14 crc kubenswrapper[4732]: W0131 09:05:14.577835 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5f0d076b93198311aad7a58d038ed6fc168d5667dad1b2ef08747de7218ac717 WatchSource:0}: Error finding container 5f0d076b93198311aad7a58d038ed6fc168d5667dad1b2ef08747de7218ac717: Status 404 returned error can't find the container with id 5f0d076b93198311aad7a58d038ed6fc168d5667dad1b2ef08747de7218ac717
Jan 31 09:05:15 crc kubenswrapper[4732]: I0131 09:05:15.218099 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"82cac6b55aafc33105786700ec55dd147b36929c869fcd28d33fcea44ba044c2"}
Jan 31 09:05:15 crc kubenswrapper[4732]: I0131 09:05:15.218385 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5f0d076b93198311aad7a58d038ed6fc168d5667dad1b2ef08747de7218ac717"}
Jan 31 09:05:15 crc kubenswrapper[4732]: I0131 09:05:15.218655 4732 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:15 crc kubenswrapper[4732]: I0131 09:05:15.218696 4732 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:15 crc kubenswrapper[4732]: E0131 09:05:15.219101 4732 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:15 crc kubenswrapper[4732]: I0131 09:05:15.219485 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: I0131 09:05:15.219893 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: I0131 09:05:15.220133 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: I0131 09:05:15.220546 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: I0131 09:05:15.220914 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: E0131 09:05:15.306075 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:05:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:05:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:05:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T09:05:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: E0131 09:05:15.306396 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: E0131 09:05:15.306819 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: E0131 09:05:15.307153 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: E0131 09:05:15.307365 4732 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:15 crc kubenswrapper[4732]: E0131 09:05:15.307384 4732 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.229980 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.230188 4732 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da" exitCode=1
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.230253 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da"}
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.230741 4732 scope.go:117] "RemoveContainer" containerID="bd0e75b86e540472dde92c0b2c29d3af05e052a4e6a01fd03cd16cce663bd5da"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.231106 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.231458 4732 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.232142 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.232277 4732 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="82cac6b55aafc33105786700ec55dd147b36929c869fcd28d33fcea44ba044c2" exitCode=0
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.232315 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"82cac6b55aafc33105786700ec55dd147b36929c869fcd28d33fcea44ba044c2"}
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.232631 4732 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.232645 4732 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.232639 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: E0131 09:05:17.233163 4732 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.231:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.233350 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: E0131 09:05:17.233376 4732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.231:6443: connect: connection refused" interval="7s"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.233581 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.233976 4732 status_manager.go:851] "Failed to get status for pod" podUID="9039963e-96e4-4b4d-abdd-79f0429da944" pod="openshift-marketplace/certified-operators-2h57x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2h57x\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.234272 4732 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.234528 4732 status_manager.go:851] "Failed to get status for pod" podUID="17e07aee-c4b1-4011-8442-c6dcfc4f415c" pod="openshift-marketplace/redhat-marketplace-krjtb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-krjtb\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.234946 4732 status_manager.go:851] "Failed to get status for pod" podUID="a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45" pod="openshift-marketplace/redhat-operators-4pkzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4pkzq\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.235519 4732 status_manager.go:851] "Failed to get status for pod" podUID="a7533049-a0d8-4488-bed6-2a9b28212061" pod="openshift-marketplace/community-operators-d7jxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-d7jxg\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: I0131 09:05:17.235792 4732 status_manager.go:851] "Failed to get status for pod" podUID="e90ec082-a189-4726-8049-2151ddf77961" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.231:6443: connect: connection refused"
Jan 31 09:05:17 crc kubenswrapper[4732]: E0131 09:05:17.554515 4732 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.231:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-2h57x.188fc57928be0c23 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-2h57x,UID:9039963e-96e4-4b4d-abdd-79f0429da944,APIVersion:v1,ResourceVersion:29558,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 09:05:02.745152547 +0000 UTC m=+241.051028751,LastTimestamp:2026-01-31 09:05:02.745152547 +0000 UTC m=+241.051028751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 31 09:05:18 crc kubenswrapper[4732]: I0131 09:05:18.242389 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 31 09:05:18 crc kubenswrapper[4732]: I0131 09:05:18.242754 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7fdf2f67b19cba3eae0f65f0c9c411a80b2a5af4c8acb388e2f13c8592f6ffc0"}
Jan 31 09:05:18 crc kubenswrapper[4732]: I0131 09:05:18.248045 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd21518371c5cfdf3ab91a35559abe0750cb66b8e231b7a6790545c4942f8f64"}
Jan 31 09:05:18 crc kubenswrapper[4732]: I0131 09:05:18.248091 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"45b29ade00c36c5d91f59c065a1aedd808bbe04ca406dbafe4230c1ce34fe2e6"}
Jan 31 09:05:18 crc kubenswrapper[4732]: I0131 09:05:18.248103 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64cebec0077595628afd3c3e4f337629c538cd16f474cff3fb36cecda7b7bc19"}
Jan 31 09:05:18 crc kubenswrapper[4732]: I0131 09:05:18.248113 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7147c0044f73e00e244e10e3177d5c1c98793fab74c39c66a8efce3ace5241d0"}
Jan 31 09:05:19 crc kubenswrapper[4732]: I0131 09:05:19.256551 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3a045bd639964cacd4c1b25c1363df22d1f5be0da4646448a7cb3f70b8d25077"}
Jan 31 09:05:19 crc kubenswrapper[4732]: I0131 09:05:19.256902 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:19 crc kubenswrapper[4732]: I0131 09:05:19.256808 4732 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:19 crc kubenswrapper[4732]: I0131 09:05:19.256926 4732 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:19 crc kubenswrapper[4732]: I0131 09:05:19.555815 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:19 crc kubenswrapper[4732]: I0131 09:05:19.555947 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:19 crc kubenswrapper[4732]: I0131 09:05:19.561940 4732 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]log ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]etcd ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/priority-and-fairness-filter ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-apiextensions-informers ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-apiextensions-controllers ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/crd-informer-synced ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-system-namespaces-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 31 09:05:19 crc kubenswrapper[4732]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/bootstrap-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/apiservice-registration-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/apiservice-discovery-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]autoregister-completion ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/apiservice-openapi-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 31 09:05:19 crc kubenswrapper[4732]: livez check failed
Jan 31 09:05:19 crc kubenswrapper[4732]: I0131 09:05:19.562013 4732 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.137190 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" podUID="c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" containerName="oauth-openshift" containerID="cri-o://9615c331134f8617d35092f6d2eb0dd4c5eead219bbc1b139774acc1bdb42b9b" gracePeriod=15
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.263743 4732 generic.go:334] "Generic (PLEG): container finished" podID="c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" containerID="9615c331134f8617d35092f6d2eb0dd4c5eead219bbc1b139774acc1bdb42b9b" exitCode=0
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.263830 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" event={"ID":"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20","Type":"ContainerDied","Data":"9615c331134f8617d35092f6d2eb0dd4c5eead219bbc1b139774acc1bdb42b9b"}
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.492940 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522270 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-policies\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522333 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-dir\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522392 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-login\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522416 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-session\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522446 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lsmw\" (UniqueName: \"kubernetes.io/projected/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-kube-api-access-9lsmw\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522460 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-idp-0-file-data\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522478 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-ocp-branding-template\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522515 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-router-certs\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522538 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-serving-cert\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522553 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-service-ca\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522574 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-cliconfig\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522594 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-provider-selection\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522623 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-error\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.522648 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-trusted-ca-bundle\") pod \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\" (UID: \"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20\") "
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.523860 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.523983 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.524953 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.526062 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.526229 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.529940 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.530269 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.530288 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-kube-api-access-9lsmw" (OuterVolumeSpecName: "kube-api-access-9lsmw") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "kube-api-access-9lsmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.530446 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.532740 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.533272 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.536887 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.537222 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.537620 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" (UID: "c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.623937 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.623994 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624011 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lsmw\" (UniqueName: \"kubernetes.io/projected/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-kube-api-access-9lsmw\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624022 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624037 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624049 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624084 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624097 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624108 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624120 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624131 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624161 4732 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624173 4732 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:20 crc kubenswrapper[4732]: I0131 09:05:20.624184 4732 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 31 09:05:21 crc kubenswrapper[4732]: I0131 09:05:21.273868 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l" event={"ID":"c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20","Type":"ContainerDied","Data":"bf0aacb740607afdcd33e43432dcaec43c8aa3d7707aec7cab5cbf845309020a"}
Jan 31 09:05:21 crc kubenswrapper[4732]: I0131 09:05:21.274288 4732 scope.go:117] "RemoveContainer" containerID="9615c331134f8617d35092f6d2eb0dd4c5eead219bbc1b139774acc1bdb42b9b"
Jan 31 09:05:21 crc kubenswrapper[4732]: I0131 09:05:21.273995 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c8t6l"
Jan 31 09:05:22 crc kubenswrapper[4732]: I0131 09:05:22.151458 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:05:23 crc kubenswrapper[4732]: I0131 09:05:23.380403 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:05:23 crc kubenswrapper[4732]: I0131 09:05:23.386884 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:05:24 crc kubenswrapper[4732]: I0131 09:05:24.276068 4732 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:24 crc kubenswrapper[4732]: I0131 09:05:24.299810 4732 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:24 crc kubenswrapper[4732]: I0131 09:05:24.299848 4732 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:24 crc kubenswrapper[4732]: I0131 09:05:24.561324 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:24 crc kubenswrapper[4732]: I0131 09:05:24.563429 4732 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7ba3697b-fd3c-4270-bb59-3408ba7ace54"
Jan 31 09:05:25 crc kubenswrapper[4732]: I0131 09:05:25.305589 4732 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:25 crc kubenswrapper[4732]: I0131 09:05:25.305837 4732 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:25 crc kubenswrapper[4732]: I0131 09:05:25.310034 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:26 crc kubenswrapper[4732]: I0131 09:05:26.312464 4732 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:26 crc kubenswrapper[4732]: I0131 09:05:26.312507 4732 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1dc686f-fa2d-4cd4-95d4-874df2dc3a1c"
Jan 31 09:05:32 crc kubenswrapper[4732]: I0131 09:05:32.155547 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 09:05:32 crc kubenswrapper[4732]: I0131 09:05:32.569122 4732 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7ba3697b-fd3c-4270-bb59-3408ba7ace54"
Jan 31 09:05:34 crc kubenswrapper[4732]: I0131 09:05:34.271072 4732 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 31 09:05:34 crc kubenswrapper[4732]: I0131 09:05:34.364311 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 31 09:05:34 crc kubenswrapper[4732]: I0131 09:05:34.540241 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 31 09:05:34 crc kubenswrapper[4732]: I0131 09:05:34.947386 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 31 09:05:34 crc kubenswrapper[4732]: I0131 09:05:34.957774 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.059528 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.315272 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.367175 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.445688 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.479059 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.515374 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.533291 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.702421 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.766380 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 31 09:05:35 crc kubenswrapper[4732]: I0131 09:05:35.861265 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.017189 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.089439 4732 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.090448 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4pkzq" podStartSLOduration=34.496990382 podStartE2EDuration="37.090417845s" podCreationTimestamp="2026-01-31 09:04:59 +0000 UTC" firstStartedPulling="2026-01-31 09:05:01.070175455 +0000 UTC m=+239.376051659" lastFinishedPulling="2026-01-31 09:05:03.663602918 +0000 UTC m=+241.969479122" observedRunningTime="2026-01-31 09:05:24.082444832 +0000 UTC m=+262.388321036" watchObservedRunningTime="2026-01-31 09:05:36.090417845 +0000 UTC m=+274.396294099"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.097813 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2h57x" podStartSLOduration=35.616753428 podStartE2EDuration="38.097786709s" podCreationTimestamp="2026-01-31 09:04:58 +0000 UTC" firstStartedPulling="2026-01-31 09:05:00.05140363 +0000 UTC m=+238.357279844" lastFinishedPulling="2026-01-31 09:05:02.532436921 +0000 UTC m=+240.838313125" observedRunningTime="2026-01-31 09:05:24.125639965 +0000 UTC m=+262.431516169" watchObservedRunningTime="2026-01-31 09:05:36.097786709 +0000 UTC m=+274.403663013"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.098710 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d7jxg" podStartSLOduration=33.274977029 podStartE2EDuration="36.098696449s" podCreationTimestamp="2026-01-31 09:05:00 +0000 UTC" firstStartedPulling="2026-01-31 09:05:02.089645295 +0000 UTC m=+240.395521499" lastFinishedPulling="2026-01-31 09:05:04.913364715 +0000 UTC m=+243.219240919" observedRunningTime="2026-01-31 09:05:24.095880977 +0000 UTC m=+262.401757181" watchObservedRunningTime="2026-01-31 09:05:36.098696449 +0000 UTC m=+274.404572703"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.100156 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-c8t6l"]
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.100259 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.105631 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.121897 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.121882888 podStartE2EDuration="12.121882888s" podCreationTimestamp="2026-01-31 09:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:05:36.119174179 +0000 UTC m=+274.425050393" watchObservedRunningTime="2026-01-31 09:05:36.121882888 +0000 UTC m=+274.427759092"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.130235 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.189733 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.209310 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.290107 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.387123 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.465339 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.551306 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" path="/var/lib/kubelet/pods/c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20/volumes"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.686406 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.710286 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.798771 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 31 09:05:36 crc kubenswrapper[4732]: I0131 09:05:36.966362 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.235892 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.266356 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.295956 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.328384 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.360064 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.388867 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.408720 4732 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.646793 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.766431 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 09:05:37 crc kubenswrapper[4732]: I0131 09:05:37.823031 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.068069 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.081260 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.212558 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.306656 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.383578 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.432129 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.455992 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.466023 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.587454 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.767308 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.792427 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.878011 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.914377 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 09:05:38 crc kubenswrapper[4732]: I0131 09:05:38.957203 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.001126 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 
09:05:39.086176 4732 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.097438 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.110642 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.380087 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.454273 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.488112 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.570560 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.686724 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.853196 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.855416 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 09:05:39 crc kubenswrapper[4732]: I0131 09:05:39.875350 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.084790 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.208176 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.253217 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.413622 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.424078 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.442109 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.542621 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.580608 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 
09:05:40.668483 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.685235 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.758347 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.779455 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.799415 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.848042 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.881451 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.909482 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 09:05:40 crc kubenswrapper[4732]: I0131 09:05:40.936333 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.009329 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.016738 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.037353 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.103767 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.178331 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.212084 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.220956 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.344703 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.380108 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.381055 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.387290 
4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.492946 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.594168 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.626906 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.631798 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.713721 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.725278 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.849310 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.932171 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 09:05:41 crc kubenswrapper[4732]: I0131 09:05:41.996587 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.030632 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.063528 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.068679 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.131721 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.145117 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.185783 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.272986 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.310625 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.326651 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 
09:05:42.338262 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.421954 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.515861 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.608529 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.684784 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.687375 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.698613 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.892313 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.907100 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.935784 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.954253 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.984882 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 09:05:42 crc kubenswrapper[4732]: I0131 09:05:42.992965 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.038871 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.087571 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.126075 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.145978 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.167823 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.175320 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 
09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.267773 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.352888 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.362161 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.416372 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.460203 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.583524 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.683442 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.768811 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 09:05:43 crc kubenswrapper[4732]: I0131 09:05:43.872039 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.003682 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.024967 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.050878 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.076160 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-68974c876c-56m92"] Jan 31 09:05:44 crc kubenswrapper[4732]: E0131 09:05:44.076353 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90ec082-a189-4726-8049-2151ddf77961" containerName="installer" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.076363 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90ec082-a189-4726-8049-2151ddf77961" containerName="installer" Jan 31 09:05:44 crc kubenswrapper[4732]: E0131 09:05:44.076387 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" containerName="oauth-openshift" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.076393 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" containerName="oauth-openshift" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.076484 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="e90ec082-a189-4726-8049-2151ddf77961" containerName="installer" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.076494 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c9fc07-9bb2-41d5-a08e-2ea5a441cd20" 
containerName="oauth-openshift" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.076863 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.078598 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.079136 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.079332 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.081146 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.083883 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.084087 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.084104 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.085127 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.085187 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.085605 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.085679 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.085695 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.086089 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.093588 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.106642 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.109866 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.113006 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-template-error\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.113187 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-template-login\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.113291 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.113404 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.113491 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkfzv\" (UniqueName: \"kubernetes.io/projected/7f741d28-9c76-4a05-8771-f8f448ee9a2a-kube-api-access-bkfzv\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.113603 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-audit-policies\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.113748 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.113846 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.113974 
4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.114076 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f741d28-9c76-4a05-8771-f8f448ee9a2a-audit-dir\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.114164 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-router-certs\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.114254 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-session\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.114355 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-service-ca\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.114450 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.160538 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.215556 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.216057 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.216176 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkfzv\" (UniqueName: \"kubernetes.io/projected/7f741d28-9c76-4a05-8771-f8f448ee9a2a-kube-api-access-bkfzv\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.216553 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217338 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-audit-policies\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217461 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217583 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217722 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f741d28-9c76-4a05-8771-f8f448ee9a2a-audit-dir\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217845 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-router-certs\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217964 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-session\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.218052 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-service-ca\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.218162 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217208 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217983 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217988 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f741d28-9c76-4a05-8771-f8f448ee9a2a-audit-dir\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.217905 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-audit-policies\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.218585 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-template-error\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.218711 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-template-login\") pod 
\"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.219064 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-service-ca\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.221779 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-router-certs\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.221782 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-session\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.222856 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.223581 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.223843 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.224984 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-template-error\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.225567 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-template-login\") pod \"oauth-openshift-68974c876c-56m92\" 
(UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.231885 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f741d28-9c76-4a05-8771-f8f448ee9a2a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.239838 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkfzv\" (UniqueName: \"kubernetes.io/projected/7f741d28-9c76-4a05-8771-f8f448ee9a2a-kube-api-access-bkfzv\") pod \"oauth-openshift-68974c876c-56m92\" (UID: \"7f741d28-9c76-4a05-8771-f8f448ee9a2a\") " pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.263938 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.302479 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.303234 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.345933 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.395023 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.438273 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.466860 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.650438 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.710757 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.772521 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.827301 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 09:05:44 crc kubenswrapper[4732]: I0131 09:05:44.965084 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.038730 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.165911 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.185863 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.340061 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.357076 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.474783 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.508091 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.570830 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.633056 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.666604 4732 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.694144 4732 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.702678 4732 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.715135 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.734910 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.783057 4732 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.786456 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.787550 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.793855 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.800458 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.936358 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 09:05:45 crc kubenswrapper[4732]: I0131 09:05:45.993997 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.137115 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.187068 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.209429 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.209688 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.335926 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.358517 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.363737 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.394065 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.410526 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.426769 4732 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.618990 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.661757 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.687308 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.737639 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.759264 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.765082 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.778918 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.784621 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.795438 4732 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.795687 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3" gracePeriod=5 Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.881239 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 09:05:46 crc kubenswrapper[4732]: I0131 09:05:46.981423 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.022522 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.026569 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.105231 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.189911 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.191550 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.240084 4732 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.275117 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.278694 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.479853 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.632137 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.669378 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.682840 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.698007 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.756941 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.785065 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.813651 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.842170 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.864863 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.866608 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 09:05:47 crc kubenswrapper[4732]: I0131 09:05:47.990436 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.100974 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.197924 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.333616 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.388247 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.446302 4732 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.472818 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.475381 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.524735 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.638842 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 09:05:48 crc kubenswrapper[4732]: I0131 09:05:48.863145 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 09:05:49 crc kubenswrapper[4732]: I0131 09:05:49.083735 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 09:05:49 crc kubenswrapper[4732]: I0131 09:05:49.166767 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 09:05:49 crc kubenswrapper[4732]: I0131 09:05:49.235291 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 09:05:49 crc kubenswrapper[4732]: I0131 09:05:49.265404 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 09:05:49 crc kubenswrapper[4732]: I0131 09:05:49.356300 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 09:05:49 crc kubenswrapper[4732]: I0131 09:05:49.680050 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 09:05:49 crc kubenswrapper[4732]: I0131 09:05:49.912679 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 09:05:50 crc kubenswrapper[4732]: I0131 09:05:50.363551 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 09:05:50 crc kubenswrapper[4732]: I0131 09:05:50.758313 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 09:05:50 crc kubenswrapper[4732]: I0131 09:05:50.791898 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68974c876c-56m92"] Jan 31 09:05:50 crc kubenswrapper[4732]: I0131 09:05:50.848236 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 09:05:51 crc kubenswrapper[4732]: I0131 09:05:51.236131 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68974c876c-56m92"] Jan 31 09:05:51 crc kubenswrapper[4732]: I0131 09:05:51.458014 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68974c876c-56m92" 
event={"ID":"7f741d28-9c76-4a05-8771-f8f448ee9a2a","Type":"ContainerStarted","Data":"47b793847a3c4466d52fbc49cdd13780012ab2610d79b2abe326b626e2b5f9ff"} Jan 31 09:05:51 crc kubenswrapper[4732]: I0131 09:05:51.561528 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 09:05:51 crc kubenswrapper[4732]: I0131 09:05:51.584808 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 09:05:51 crc kubenswrapper[4732]: I0131 09:05:51.931545 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 09:05:51 crc kubenswrapper[4732]: I0131 09:05:51.931645 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.015556 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.015689 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.015705 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.015729 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.015826 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.015888 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.015933 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.015952 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.016072 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.016180 4732 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.016192 4732 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.016202 4732 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.016213 4732 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.023099 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.117791 4732 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.285433 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.466433 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.466502 4732 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3" exitCode=137 Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.466583 4732 scope.go:117] "RemoveContainer" containerID="de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.466624 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.468171 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68974c876c-56m92" event={"ID":"7f741d28-9c76-4a05-8771-f8f448ee9a2a","Type":"ContainerStarted","Data":"124175dc8a5a5e72202c80a37623e943ecd794550cce58db3a9ae7215e5bba55"} Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.468614 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.477601 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68974c876c-56m92" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.485930 4732 scope.go:117] "RemoveContainer" containerID="de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3" Jan 31 09:05:52 crc kubenswrapper[4732]: E0131 09:05:52.486259 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3\": container with ID starting with de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3 not found: ID does not exist" containerID="de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.486292 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3"} err="failed to get container status \"de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3\": rpc error: code = NotFound desc = could not find container \"de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3\": container with ID starting with de7e77b83a8e93a3b4873676b3b8bcdbab14d5437e495e34e69b839aa521fac3 not found: ID does not exist" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.504815 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68974c876c-56m92" 
podStartSLOduration=57.504795343 podStartE2EDuration="57.504795343s" podCreationTimestamp="2026-01-31 09:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:05:52.50140876 +0000 UTC m=+290.807284964" watchObservedRunningTime="2026-01-31 09:05:52.504795343 +0000 UTC m=+290.810671547" Jan 31 09:05:52 crc kubenswrapper[4732]: I0131 09:05:52.554215 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 09:06:02 crc kubenswrapper[4732]: I0131 09:06:02.271719 4732 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.109728 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tg4xc"] Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.111206 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" podUID="219a04b6-e7bd-4138-bcc7-4f650537aa24" containerName="controller-manager" containerID="cri-o://45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481" gracePeriod=30 Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.211997 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"] Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.212520 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" podUID="541ea3c2-891c-4c3e-81fd-9d340112c62b" containerName="route-controller-manager" containerID="cri-o://e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274" gracePeriod=30 Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.465417 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.519956 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-client-ca\") pod \"219a04b6-e7bd-4138-bcc7-4f650537aa24\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.520027 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-config\") pod \"219a04b6-e7bd-4138-bcc7-4f650537aa24\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.520066 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bws5c\" (UniqueName: \"kubernetes.io/projected/219a04b6-e7bd-4138-bcc7-4f650537aa24-kube-api-access-bws5c\") pod \"219a04b6-e7bd-4138-bcc7-4f650537aa24\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.520100 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-proxy-ca-bundles\") pod \"219a04b6-e7bd-4138-bcc7-4f650537aa24\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.520121 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/219a04b6-e7bd-4138-bcc7-4f650537aa24-serving-cert\") pod \"219a04b6-e7bd-4138-bcc7-4f650537aa24\" (UID: \"219a04b6-e7bd-4138-bcc7-4f650537aa24\") " Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.521504 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-client-ca" (OuterVolumeSpecName: "client-ca") pod "219a04b6-e7bd-4138-bcc7-4f650537aa24" (UID: "219a04b6-e7bd-4138-bcc7-4f650537aa24"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.521904 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "219a04b6-e7bd-4138-bcc7-4f650537aa24" (UID: "219a04b6-e7bd-4138-bcc7-4f650537aa24"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.522082 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-config" (OuterVolumeSpecName: "config") pod "219a04b6-e7bd-4138-bcc7-4f650537aa24" (UID: "219a04b6-e7bd-4138-bcc7-4f650537aa24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.526296 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219a04b6-e7bd-4138-bcc7-4f650537aa24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "219a04b6-e7bd-4138-bcc7-4f650537aa24" (UID: "219a04b6-e7bd-4138-bcc7-4f650537aa24"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.526447 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219a04b6-e7bd-4138-bcc7-4f650537aa24-kube-api-access-bws5c" (OuterVolumeSpecName: "kube-api-access-bws5c") pod "219a04b6-e7bd-4138-bcc7-4f650537aa24" (UID: "219a04b6-e7bd-4138-bcc7-4f650537aa24"). InnerVolumeSpecName "kube-api-access-bws5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.540039 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.553101 4732 generic.go:334] "Generic (PLEG): container finished" podID="541ea3c2-891c-4c3e-81fd-9d340112c62b" containerID="e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274" exitCode=0 Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.553201 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" event={"ID":"541ea3c2-891c-4c3e-81fd-9d340112c62b","Type":"ContainerDied","Data":"e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274"} Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.553231 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" event={"ID":"541ea3c2-891c-4c3e-81fd-9d340112c62b","Type":"ContainerDied","Data":"2ecd69d7eb9bd214f9aff82913ddef7756830d15321e64e3fd947685beb0e5f0"} Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.553250 4732 scope.go:117] "RemoveContainer" containerID="e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.553339 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.563053 4732 generic.go:334] "Generic (PLEG): container finished" podID="219a04b6-e7bd-4138-bcc7-4f650537aa24" containerID="45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481" exitCode=0 Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.563106 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" event={"ID":"219a04b6-e7bd-4138-bcc7-4f650537aa24","Type":"ContainerDied","Data":"45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481"} Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.563137 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" event={"ID":"219a04b6-e7bd-4138-bcc7-4f650537aa24","Type":"ContainerDied","Data":"371f900003f95ceb406307c3e3064fc3164d972cb97c8e25812cb0e57334cb37"} Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.563194 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tg4xc" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.583147 4732 scope.go:117] "RemoveContainer" containerID="e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274" Jan 31 09:06:07 crc kubenswrapper[4732]: E0131 09:06:07.583974 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274\": container with ID starting with e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274 not found: ID does not exist" containerID="e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.584006 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274"} err="failed to get container status \"e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274\": rpc error: code = NotFound desc = could not find container \"e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274\": container with ID starting with e821c09a3ccb6a40c23e0f74bb9209a294da276e26d710a28decd86bc0d3c274 not found: ID does not exist" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.584023 4732 scope.go:117] "RemoveContainer" containerID="45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.593428 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tg4xc"] Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.602632 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tg4xc"] Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.611420 4732 scope.go:117] "RemoveContainer" containerID="45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481" Jan 31 09:06:07 crc kubenswrapper[4732]: E0131 09:06:07.611868 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481\": container with ID starting with 45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481 not found: ID does not exist" containerID="45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.611926 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481"} err="failed to get container status \"45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481\": rpc error: code = NotFound desc = could not find container \"45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481\": container with ID starting with 45d9723ae1f9063bad99b0cd5b3c8cb9365e80583327ef29b13ddfa585f2a481 not found: ID does not exist" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.621344 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-config\") pod \"541ea3c2-891c-4c3e-81fd-9d340112c62b\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.621400 4732 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541ea3c2-891c-4c3e-81fd-9d340112c62b-serving-cert\") pod \"541ea3c2-891c-4c3e-81fd-9d340112c62b\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.621462 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhhgt\" (UniqueName: \"kubernetes.io/projected/541ea3c2-891c-4c3e-81fd-9d340112c62b-kube-api-access-jhhgt\") pod \"541ea3c2-891c-4c3e-81fd-9d340112c62b\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.621495 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-client-ca\") pod \"541ea3c2-891c-4c3e-81fd-9d340112c62b\" (UID: \"541ea3c2-891c-4c3e-81fd-9d340112c62b\") " Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.621812 4732 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.621825 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.621834 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bws5c\" (UniqueName: \"kubernetes.io/projected/219a04b6-e7bd-4138-bcc7-4f650537aa24-kube-api-access-bws5c\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.621846 4732 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/219a04b6-e7bd-4138-bcc7-4f650537aa24-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.621855 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/219a04b6-e7bd-4138-bcc7-4f650537aa24-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.622814 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-client-ca" (OuterVolumeSpecName: "client-ca") pod "541ea3c2-891c-4c3e-81fd-9d340112c62b" (UID: "541ea3c2-891c-4c3e-81fd-9d340112c62b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.624364 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-config" (OuterVolumeSpecName: "config") pod "541ea3c2-891c-4c3e-81fd-9d340112c62b" (UID: "541ea3c2-891c-4c3e-81fd-9d340112c62b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.625435 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541ea3c2-891c-4c3e-81fd-9d340112c62b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "541ea3c2-891c-4c3e-81fd-9d340112c62b" (UID: "541ea3c2-891c-4c3e-81fd-9d340112c62b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.627245 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541ea3c2-891c-4c3e-81fd-9d340112c62b-kube-api-access-jhhgt" (OuterVolumeSpecName: "kube-api-access-jhhgt") pod "541ea3c2-891c-4c3e-81fd-9d340112c62b" (UID: "541ea3c2-891c-4c3e-81fd-9d340112c62b"). InnerVolumeSpecName "kube-api-access-jhhgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.723341 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541ea3c2-891c-4c3e-81fd-9d340112c62b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.723432 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhhgt\" (UniqueName: \"kubernetes.io/projected/541ea3c2-891c-4c3e-81fd-9d340112c62b-kube-api-access-jhhgt\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.723460 4732 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.723479 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541ea3c2-891c-4c3e-81fd-9d340112c62b-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.900381 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"] Jan 31 09:06:07 crc kubenswrapper[4732]: I0131 09:06:07.907464 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5s6dp"] Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.551709 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219a04b6-e7bd-4138-bcc7-4f650537aa24" path="/var/lib/kubelet/pods/219a04b6-e7bd-4138-bcc7-4f650537aa24/volumes" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.552415 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541ea3c2-891c-4c3e-81fd-9d340112c62b" path="/var/lib/kubelet/pods/541ea3c2-891c-4c3e-81fd-9d340112c62b/volumes" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.635509 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k"] Jan 31 09:06:08 crc kubenswrapper[4732]: E0131 09:06:08.635953 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.635981 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 09:06:08 crc kubenswrapper[4732]: E0131 09:06:08.635994 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541ea3c2-891c-4c3e-81fd-9d340112c62b" containerName="route-controller-manager" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.636005 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="541ea3c2-891c-4c3e-81fd-9d340112c62b" containerName="route-controller-manager" Jan 31 09:06:08 crc kubenswrapper[4732]: E0131 09:06:08.636017 4732 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219a04b6-e7bd-4138-bcc7-4f650537aa24" containerName="controller-manager" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.636172 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="219a04b6-e7bd-4138-bcc7-4f650537aa24" containerName="controller-manager" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.636305 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.636328 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="219a04b6-e7bd-4138-bcc7-4f650537aa24" containerName="controller-manager" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.636342 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="541ea3c2-891c-4c3e-81fd-9d340112c62b" containerName="route-controller-manager" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.636920 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.640421 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.640881 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.641211 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.641627 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.641886 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.642918 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr"] Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.643316 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.643923 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.647060 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.647375 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.647826 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.648060 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.648142 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.648489 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.650600 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k"] Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.653133 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr"] Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.655909 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.754762 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-config\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.755120 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-config\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.755153 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-client-ca\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.755210 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-client-ca\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc 
kubenswrapper[4732]: I0131 09:06:08.755264 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de12e404-51e0-4b46-939c-3e4d4f9fbe13-serving-cert\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.755287 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxvp\" (UniqueName: \"kubernetes.io/projected/de12e404-51e0-4b46-939c-3e4d4f9fbe13-kube-api-access-vxxvp\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.755335 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-serving-cert\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.755359 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnkkl\" (UniqueName: \"kubernetes.io/projected/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-kube-api-access-vnkkl\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.755442 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-proxy-ca-bundles\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.856375 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-proxy-ca-bundles\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.856481 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-config\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.856513 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-config\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 
09:06:08.856541 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-client-ca\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.856561 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-client-ca\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.856587 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxvp\" (UniqueName: \"kubernetes.io/projected/de12e404-51e0-4b46-939c-3e4d4f9fbe13-kube-api-access-vxxvp\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.856609 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de12e404-51e0-4b46-939c-3e4d4f9fbe13-serving-cert\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.856640 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-serving-cert\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.856706 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnkkl\" (UniqueName: \"kubernetes.io/projected/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-kube-api-access-vnkkl\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.857486 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-client-ca\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.858082 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-client-ca\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.858195 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-config\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.858585 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-proxy-ca-bundles\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.858838 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-config\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.862679 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de12e404-51e0-4b46-939c-3e4d4f9fbe13-serving-cert\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.862752 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-serving-cert\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.872653 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxvp\" (UniqueName: \"kubernetes.io/projected/de12e404-51e0-4b46-939c-3e4d4f9fbe13-kube-api-access-vxxvp\") pod \"controller-manager-5b9c9894ff-7ssvr\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.872872 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnkkl\" (UniqueName: \"kubernetes.io/projected/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-kube-api-access-vnkkl\") pod \"route-controller-manager-76f567b4cc-wx87k\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.968808 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:08 crc kubenswrapper[4732]: I0131 09:06:08.977836 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.220151 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr"] Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.267481 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k"] Jan 31 09:06:09 crc kubenswrapper[4732]: W0131 09:06:09.273159 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59cbb5c4_c743_47e1_8dc3_e4be5ddd3594.slice/crio-66a60772a896bec1c7562b8662934cf79b67703e7fa04d7571b157999e2e5ce1 WatchSource:0}: Error finding container 66a60772a896bec1c7562b8662934cf79b67703e7fa04d7571b157999e2e5ce1: Status 404 returned error can't find the container with id 66a60772a896bec1c7562b8662934cf79b67703e7fa04d7571b157999e2e5ce1 Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.580620 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" event={"ID":"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594","Type":"ContainerStarted","Data":"a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552"} Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.581904 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" event={"ID":"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594","Type":"ContainerStarted","Data":"66a60772a896bec1c7562b8662934cf79b67703e7fa04d7571b157999e2e5ce1"} Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.581990 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.582898 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" event={"ID":"de12e404-51e0-4b46-939c-3e4d4f9fbe13","Type":"ContainerStarted","Data":"37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056"} Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.582997 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" event={"ID":"de12e404-51e0-4b46-939c-3e4d4f9fbe13","Type":"ContainerStarted","Data":"26c9134abbeaeaee484b07f52184f5d710a6aba6bd8ddf53fc947ed96eeb5d71"} Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.583843 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.597150 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.615463 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" podStartSLOduration=2.615443365 podStartE2EDuration="2.615443365s" podCreationTimestamp="2026-01-31 09:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:09.613827401 +0000 UTC 
m=+307.919703615" watchObservedRunningTime="2026-01-31 09:06:09.615443365 +0000 UTC m=+307.921319569" Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.632699 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" podStartSLOduration=2.632683686 podStartE2EDuration="2.632683686s" podCreationTimestamp="2026-01-31 09:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:09.63067442 +0000 UTC m=+307.936550624" watchObservedRunningTime="2026-01-31 09:06:09.632683686 +0000 UTC m=+307.938559890" Jan 31 09:06:09 crc kubenswrapper[4732]: I0131 09:06:09.947972 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:11 crc kubenswrapper[4732]: I0131 09:06:11.584970 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr"] Jan 31 09:06:11 crc kubenswrapper[4732]: I0131 09:06:11.595039 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k"] Jan 31 09:06:12 crc kubenswrapper[4732]: I0131 09:06:12.611456 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" podUID="59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" containerName="route-controller-manager" containerID="cri-o://a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552" gracePeriod=30 Jan 31 09:06:12 crc kubenswrapper[4732]: I0131 09:06:12.612200 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" podUID="de12e404-51e0-4b46-939c-3e4d4f9fbe13" containerName="controller-manager" containerID="cri-o://37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056" gracePeriod=30 Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.029960 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.126047 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-client-ca\") pod \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.126187 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnkkl\" (UniqueName: \"kubernetes.io/projected/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-kube-api-access-vnkkl\") pod \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.126218 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-config\") pod \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.126241 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-serving-cert\") pod \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\" (UID: \"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594\") " Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.127604 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-client-ca" (OuterVolumeSpecName: "client-ca") pod "59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" (UID: "59cbb5c4-c743-47e1-8dc3-e4be5ddd3594"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.127824 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-config" (OuterVolumeSpecName: "config") pod "59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" (UID: "59cbb5c4-c743-47e1-8dc3-e4be5ddd3594"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.133704 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" (UID: "59cbb5c4-c743-47e1-8dc3-e4be5ddd3594"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.135862 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-kube-api-access-vnkkl" (OuterVolumeSpecName: "kube-api-access-vnkkl") pod "59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" (UID: "59cbb5c4-c743-47e1-8dc3-e4be5ddd3594"). InnerVolumeSpecName "kube-api-access-vnkkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.164387 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.227461 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de12e404-51e0-4b46-939c-3e4d4f9fbe13-serving-cert\") pod \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.227508 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxxvp\" (UniqueName: \"kubernetes.io/projected/de12e404-51e0-4b46-939c-3e4d4f9fbe13-kube-api-access-vxxvp\") pod \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.227534 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-client-ca\") pod \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.227573 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-config\") pod \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.227756 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-proxy-ca-bundles\") pod \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\" (UID: \"de12e404-51e0-4b46-939c-3e4d4f9fbe13\") " Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.228837 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnkkl\" (UniqueName: \"kubernetes.io/projected/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-kube-api-access-vnkkl\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.229080 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.229090 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.229102 4732 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.228272 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-client-ca" (OuterVolumeSpecName: "client-ca") pod "de12e404-51e0-4b46-939c-3e4d4f9fbe13" (UID: "de12e404-51e0-4b46-939c-3e4d4f9fbe13"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.228299 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "de12e404-51e0-4b46-939c-3e4d4f9fbe13" (UID: "de12e404-51e0-4b46-939c-3e4d4f9fbe13"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.228564 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-config" (OuterVolumeSpecName: "config") pod "de12e404-51e0-4b46-939c-3e4d4f9fbe13" (UID: "de12e404-51e0-4b46-939c-3e4d4f9fbe13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.230713 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de12e404-51e0-4b46-939c-3e4d4f9fbe13-kube-api-access-vxxvp" (OuterVolumeSpecName: "kube-api-access-vxxvp") pod "de12e404-51e0-4b46-939c-3e4d4f9fbe13" (UID: "de12e404-51e0-4b46-939c-3e4d4f9fbe13"). InnerVolumeSpecName "kube-api-access-vxxvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.230931 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de12e404-51e0-4b46-939c-3e4d4f9fbe13-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de12e404-51e0-4b46-939c-3e4d4f9fbe13" (UID: "de12e404-51e0-4b46-939c-3e4d4f9fbe13"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.330648 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.330703 4732 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.330716 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de12e404-51e0-4b46-939c-3e4d4f9fbe13-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.330727 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxxvp\" (UniqueName: \"kubernetes.io/projected/de12e404-51e0-4b46-939c-3e4d4f9fbe13-kube-api-access-vxxvp\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.330743 4732 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de12e404-51e0-4b46-939c-3e4d4f9fbe13-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.621637 4732 generic.go:334] "Generic (PLEG): container finished" podID="59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" containerID="a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552" exitCode=0 Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.621684 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.621706 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" event={"ID":"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594","Type":"ContainerDied","Data":"a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552"} Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.621956 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k" event={"ID":"59cbb5c4-c743-47e1-8dc3-e4be5ddd3594","Type":"ContainerDied","Data":"66a60772a896bec1c7562b8662934cf79b67703e7fa04d7571b157999e2e5ce1"} Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.622022 4732 scope.go:117] "RemoveContainer" containerID="a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.623957 4732 generic.go:334] "Generic (PLEG): container finished" podID="de12e404-51e0-4b46-939c-3e4d4f9fbe13" containerID="37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056" exitCode=0 Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.624003 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" event={"ID":"de12e404-51e0-4b46-939c-3e4d4f9fbe13","Type":"ContainerDied","Data":"37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056"} Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.624039 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" event={"ID":"de12e404-51e0-4b46-939c-3e4d4f9fbe13","Type":"ContainerDied","Data":"26c9134abbeaeaee484b07f52184f5d710a6aba6bd8ddf53fc947ed96eeb5d71"} Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.624887 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.648517 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd"] Jan 31 09:06:13 crc kubenswrapper[4732]: E0131 09:06:13.648950 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de12e404-51e0-4b46-939c-3e4d4f9fbe13" containerName="controller-manager" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.648973 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="de12e404-51e0-4b46-939c-3e4d4f9fbe13" containerName="controller-manager" Jan 31 09:06:13 crc kubenswrapper[4732]: E0131 09:06:13.648997 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" containerName="route-controller-manager" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.649007 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" containerName="route-controller-manager" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.649483 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="de12e404-51e0-4b46-939c-3e4d4f9fbe13" containerName="controller-manager" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.649509 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" containerName="route-controller-manager" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.649980 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.654633 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.654726 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.654914 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.654633 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.656812 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.657859 4732 scope.go:117] "RemoveContainer" containerID="a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.659339 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.660438 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn"] Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.666540 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: E0131 09:06:13.667594 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552\": container with ID starting with a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552 not found: ID does not exist" containerID="a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.667634 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552"} err="failed to get container status \"a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552\": rpc error: code = NotFound desc = could not find container \"a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552\": container with ID starting with a79fc2d373557ec62d345e067f38d8a9c1ffa9f0c16b680a9e29b25ab06e7552 not found: ID does not exist" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.667702 4732 scope.go:117] "RemoveContainer" containerID="37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.670759 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.670977 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.673015 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.673352 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.684576 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.701529 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.701822 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.703638 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd"] Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.705539 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn"] Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.718323 4732 scope.go:117] "RemoveContainer" containerID="37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056" Jan 31 09:06:13 crc kubenswrapper[4732]: E0131 09:06:13.718925 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056\": container with ID starting with 
37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056 not found: ID does not exist" containerID="37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056"
Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.719080 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056"} err="failed to get container status \"37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056\": rpc error: code = NotFound desc = could not find container \"37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056\": container with ID starting with 37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056 not found: ID does not exist"
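The error pair above is the kubelet's idempotent container cleanup: RemoveContainer asks the runtime for the container's status, CRI-O answers NotFound because the container is already gone, and the deletor logs the error and treats the removal as complete. A small sketch of that tolerate-NotFound pattern; the helper and the sentinel error are stand-ins, not CRI types:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNotFound stands in for the CRI's "rpc error: code = NotFound" seen above.
var ErrNotFound = errors.New("container not found")

// getContainerStatus is a stub runtime call; the real kubelet asks CRI-O over CRI.
func getContainerStatus(id string) error {
	return fmt.Errorf("could not find container %q: %w", id, ErrNotFound)
}

// removeContainer mirrors the logged behavior: a container that is already
// gone must not fail cleanup, so NotFound is treated as success.
func removeContainer(id string) error {
	if err := getContainerStatus(id); err != nil {
		if errors.Is(err, ErrNotFound) {
			fmt.Printf("container %.12s already removed, nothing to do\n", id)
			return nil // idempotent delete: absence is the desired end state
		}
		return err
	}
	// ... issue the actual RemoveContainer call here ...
	return nil
}

func main() {
	_ = removeContainer("37d826fc8b67119e9529dfec023e0e9d5aa68c4387172bcc00d095d536059056")
}
```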
\"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.736968 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbm5f\" (UniqueName: \"kubernetes.io/projected/87d41f5c-8722-4115-b4b2-06493d6f18e2-kube-api-access-hbm5f\") pod \"route-controller-manager-6685f4fd5b-k4npd\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.736985 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-config\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.737006 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f12b30-c71f-4cf5-88b2-06c78ce8243a-serving-cert\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.739422 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76f567b4cc-wx87k"] Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.749448 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr"] Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.753222 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b9c9894ff-7ssvr"] Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.837650 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-client-ca\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.837705 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87d41f5c-8722-4115-b4b2-06493d6f18e2-serving-cert\") pod \"route-controller-manager-6685f4fd5b-k4npd\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.837738 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz7s9\" (UniqueName: \"kubernetes.io/projected/07f12b30-c71f-4cf5-88b2-06c78ce8243a-kube-api-access-wz7s9\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.837756 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-config\") pod 
\"route-controller-manager-6685f4fd5b-k4npd\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.837779 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-proxy-ca-bundles\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.837798 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbm5f\" (UniqueName: \"kubernetes.io/projected/87d41f5c-8722-4115-b4b2-06493d6f18e2-kube-api-access-hbm5f\") pod \"route-controller-manager-6685f4fd5b-k4npd\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.837817 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-config\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.837835 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f12b30-c71f-4cf5-88b2-06c78ce8243a-serving-cert\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.837866 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-client-ca\") pod \"route-controller-manager-6685f4fd5b-k4npd\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.839161 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-client-ca\") pod \"route-controller-manager-6685f4fd5b-k4npd\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.839865 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-config\") pod \"route-controller-manager-6685f4fd5b-k4npd\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.839870 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-proxy-ca-bundles\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " 
pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.840018 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-config\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.840329 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-client-ca\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.844880 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f12b30-c71f-4cf5-88b2-06c78ce8243a-serving-cert\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.847583 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87d41f5c-8722-4115-b4b2-06493d6f18e2-serving-cert\") pod \"route-controller-manager-6685f4fd5b-k4npd\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.865265 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz7s9\" (UniqueName: \"kubernetes.io/projected/07f12b30-c71f-4cf5-88b2-06c78ce8243a-kube-api-access-wz7s9\") pod \"controller-manager-7568f5d7c4-7jqxn\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:13 crc kubenswrapper[4732]: I0131 09:06:13.865753 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbm5f\" (UniqueName: \"kubernetes.io/projected/87d41f5c-8722-4115-b4b2-06493d6f18e2-kube-api-access-hbm5f\") pod \"route-controller-manager-6685f4fd5b-k4npd\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.008761 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.030536 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.247454 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd"] Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.347731 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn"] Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.548840 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59cbb5c4-c743-47e1-8dc3-e4be5ddd3594" path="/var/lib/kubelet/pods/59cbb5c4-c743-47e1-8dc3-e4be5ddd3594/volumes" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.550010 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de12e404-51e0-4b46-939c-3e4d4f9fbe13" path="/var/lib/kubelet/pods/de12e404-51e0-4b46-939c-3e4d4f9fbe13/volumes" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.631193 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" event={"ID":"87d41f5c-8722-4115-b4b2-06493d6f18e2","Type":"ContainerStarted","Data":"20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8"} Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.631248 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" event={"ID":"87d41f5c-8722-4115-b4b2-06493d6f18e2","Type":"ContainerStarted","Data":"827277a6a635c6307caf7bd68b15efb68e2fe5da6e8b51a1e5a77e41d83b1851"} Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.631429 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.643702 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" event={"ID":"07f12b30-c71f-4cf5-88b2-06c78ce8243a","Type":"ContainerStarted","Data":"46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c"} Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.643894 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" event={"ID":"07f12b30-c71f-4cf5-88b2-06c78ce8243a","Type":"ContainerStarted","Data":"89c7a5bad7e33eac41492a051d188754a5f8894d0eb4a7eb8aeaa6a70e8cf9f2"} Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.644837 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.649881 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.656094 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" podStartSLOduration=2.656072475 podStartE2EDuration="2.656072475s" podCreationTimestamp="2026-01-31 09:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:14.65108565 +0000 UTC m=+312.956961854" 
watchObservedRunningTime="2026-01-31 09:06:14.656072475 +0000 UTC m=+312.961948679" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.667496 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" podStartSLOduration=2.667480383 podStartE2EDuration="2.667480383s" podCreationTimestamp="2026-01-31 09:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:14.664975951 +0000 UTC m=+312.970852155" watchObservedRunningTime="2026-01-31 09:06:14.667480383 +0000 UTC m=+312.973356587" Jan 31 09:06:14 crc kubenswrapper[4732]: I0131 09:06:14.862552 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.140982 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tr5nx"] Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.143197 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.155696 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tr5nx"] Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.245104 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-registry-certificates\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.245175 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjrv\" (UniqueName: \"kubernetes.io/projected/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-kube-api-access-xnjrv\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.245227 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.245261 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.245291 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-registry-tls\") pod 
\"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.245746 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-bound-sa-token\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.245848 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.245915 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-trusted-ca\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.269159 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.347225 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-bound-sa-token\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.347293 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.347328 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-trusted-ca\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.347352 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-registry-certificates\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc 
kubenswrapper[4732]: I0131 09:06:21.347388 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnjrv\" (UniqueName: \"kubernetes.io/projected/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-kube-api-access-xnjrv\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.347421 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.347454 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-registry-tls\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.348206 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.348951 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-registry-certificates\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.349142 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-trusted-ca\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.354334 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.357202 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-registry-tls\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.365224 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnjrv\" (UniqueName: \"kubernetes.io/projected/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-kube-api-access-xnjrv\") pod 
\"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.366738 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b-bound-sa-token\") pod \"image-registry-66df7c8f76-tr5nx\" (UID: \"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b\") " pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.499091 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:21 crc kubenswrapper[4732]: I0131 09:06:21.912103 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tr5nx"] Jan 31 09:06:22 crc kubenswrapper[4732]: I0131 09:06:22.688193 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" event={"ID":"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b","Type":"ContainerStarted","Data":"e173d17a098c3393888c2a9997cdb0ed89bf1e06903224299483c05122bf1637"} Jan 31 09:06:22 crc kubenswrapper[4732]: I0131 09:06:22.688580 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" event={"ID":"819ce8a1-8a1b-4e4d-af2a-dbf4f2b30e0b","Type":"ContainerStarted","Data":"94ba3fc88647739e6f295d286d60a1d1ef59ffd73b743387bfa28e77b020de4b"} Jan 31 09:06:22 crc kubenswrapper[4732]: I0131 09:06:22.688635 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:22 crc kubenswrapper[4732]: I0131 09:06:22.711590 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" podStartSLOduration=1.711561454 podStartE2EDuration="1.711561454s" podCreationTimestamp="2026-01-31 09:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:22.711274294 +0000 UTC m=+321.017150508" watchObservedRunningTime="2026-01-31 09:06:22.711561454 +0000 UTC m=+321.017437658" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.090648 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn"] Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.091183 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" podUID="07f12b30-c71f-4cf5-88b2-06c78ce8243a" containerName="controller-manager" containerID="cri-o://46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c" gracePeriod=30 Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.594725 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.631016 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-client-ca\") pod \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.631058 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-config\") pod \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.631186 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-proxy-ca-bundles\") pod \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.631222 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz7s9\" (UniqueName: \"kubernetes.io/projected/07f12b30-c71f-4cf5-88b2-06c78ce8243a-kube-api-access-wz7s9\") pod \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.631250 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f12b30-c71f-4cf5-88b2-06c78ce8243a-serving-cert\") pod \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\" (UID: \"07f12b30-c71f-4cf5-88b2-06c78ce8243a\") " Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.633194 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-config" (OuterVolumeSpecName: "config") pod "07f12b30-c71f-4cf5-88b2-06c78ce8243a" (UID: "07f12b30-c71f-4cf5-88b2-06c78ce8243a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.633574 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-client-ca" (OuterVolumeSpecName: "client-ca") pod "07f12b30-c71f-4cf5-88b2-06c78ce8243a" (UID: "07f12b30-c71f-4cf5-88b2-06c78ce8243a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.634182 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "07f12b30-c71f-4cf5-88b2-06c78ce8243a" (UID: "07f12b30-c71f-4cf5-88b2-06c78ce8243a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.638348 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f12b30-c71f-4cf5-88b2-06c78ce8243a-kube-api-access-wz7s9" (OuterVolumeSpecName: "kube-api-access-wz7s9") pod "07f12b30-c71f-4cf5-88b2-06c78ce8243a" (UID: "07f12b30-c71f-4cf5-88b2-06c78ce8243a"). InnerVolumeSpecName "kube-api-access-wz7s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.640160 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f12b30-c71f-4cf5-88b2-06c78ce8243a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07f12b30-c71f-4cf5-88b2-06c78ce8243a" (UID: "07f12b30-c71f-4cf5-88b2-06c78ce8243a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.725371 4732 generic.go:334] "Generic (PLEG): container finished" podID="07f12b30-c71f-4cf5-88b2-06c78ce8243a" containerID="46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c" exitCode=0 Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.725419 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" event={"ID":"07f12b30-c71f-4cf5-88b2-06c78ce8243a","Type":"ContainerDied","Data":"46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c"} Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.725458 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" event={"ID":"07f12b30-c71f-4cf5-88b2-06c78ce8243a","Type":"ContainerDied","Data":"89c7a5bad7e33eac41492a051d188754a5f8894d0eb4a7eb8aeaa6a70e8cf9f2"} Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.725477 4732 scope.go:117] "RemoveContainer" containerID="46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.725485 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.734889 4732 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.735082 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz7s9\" (UniqueName: \"kubernetes.io/projected/07f12b30-c71f-4cf5-88b2-06c78ce8243a-kube-api-access-wz7s9\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.735142 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f12b30-c71f-4cf5-88b2-06c78ce8243a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.735191 4732 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.735241 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f12b30-c71f-4cf5-88b2-06c78ce8243a-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.742530 4732 scope.go:117] "RemoveContainer" containerID="46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c" Jan 31 09:06:27 crc kubenswrapper[4732]: E0131 09:06:27.742917 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c\": container with ID starting with 46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c not found: ID does not exist" containerID="46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.742956 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c"} err="failed to get container status \"46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c\": rpc error: code = NotFound desc = could not find container \"46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c\": container with ID starting with 46b96dd899c9517fcc86481d9a15637e4e6eede11894fb27964aa47c819db70c not found: ID does not exist" Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.785563 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn"] Jan 31 09:06:27 crc kubenswrapper[4732]: I0131 09:06:27.793001 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-7jqxn"] Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.551179 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f12b30-c71f-4cf5-88b2-06c78ce8243a" path="/var/lib/kubelet/pods/07f12b30-c71f-4cf5-88b2-06c78ce8243a/volumes" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.649414 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77cc4f456b-9m7qc"] Jan 31 09:06:28 crc kubenswrapper[4732]: E0131 09:06:28.649687 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f12b30-c71f-4cf5-88b2-06c78ce8243a" containerName="controller-manager" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.649716 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f12b30-c71f-4cf5-88b2-06c78ce8243a" containerName="controller-manager" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.649834 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f12b30-c71f-4cf5-88b2-06c78ce8243a" containerName="controller-manager" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.650252 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.657523 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.657708 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.657807 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.658056 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.658282 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.658542 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.665472 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.665922 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77cc4f456b-9m7qc"] Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.748584 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-serving-cert\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.748638 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-client-ca\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.748704 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-proxy-ca-bundles\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.748752 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwxs6\" (UniqueName: \"kubernetes.io/projected/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-kube-api-access-qwxs6\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.748860 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-config\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.850935 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-config\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.851015 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-serving-cert\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.851083 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-client-ca\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.851139 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-proxy-ca-bundles\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.851173 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwxs6\" (UniqueName: \"kubernetes.io/projected/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-kube-api-access-qwxs6\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.852432 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-proxy-ca-bundles\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.852478 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-client-ca\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.853247 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-config\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" 
Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.855925 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-serving-cert\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.874457 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwxs6\" (UniqueName: \"kubernetes.io/projected/5be9f2ca-96a2-4e31-8ee7-6848e08c1833-kube-api-access-qwxs6\") pod \"controller-manager-77cc4f456b-9m7qc\" (UID: \"5be9f2ca-96a2-4e31-8ee7-6848e08c1833\") " pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:28 crc kubenswrapper[4732]: I0131 09:06:28.972325 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:29 crc kubenswrapper[4732]: I0131 09:06:29.387318 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77cc4f456b-9m7qc"] Jan 31 09:06:29 crc kubenswrapper[4732]: W0131 09:06:29.395519 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5be9f2ca_96a2_4e31_8ee7_6848e08c1833.slice/crio-13021a9007deab986c417d6f9474668c43e92c709e87b1c73c8b29ff4042a64d WatchSource:0}: Error finding container 13021a9007deab986c417d6f9474668c43e92c709e87b1c73c8b29ff4042a64d: Status 404 returned error can't find the container with id 13021a9007deab986c417d6f9474668c43e92c709e87b1c73c8b29ff4042a64d Jan 31 09:06:29 crc kubenswrapper[4732]: I0131 09:06:29.742887 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" event={"ID":"5be9f2ca-96a2-4e31-8ee7-6848e08c1833","Type":"ContainerStarted","Data":"d01d1e298dbf72d8fa542814429bb8e1c8f2cafa02648a97d342a9131ae4ceb2"} Jan 31 09:06:29 crc kubenswrapper[4732]: I0131 09:06:29.744152 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:29 crc kubenswrapper[4732]: I0131 09:06:29.744215 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" event={"ID":"5be9f2ca-96a2-4e31-8ee7-6848e08c1833","Type":"ContainerStarted","Data":"13021a9007deab986c417d6f9474668c43e92c709e87b1c73c8b29ff4042a64d"} Jan 31 09:06:29 crc kubenswrapper[4732]: I0131 09:06:29.752637 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" Jan 31 09:06:29 crc kubenswrapper[4732]: I0131 09:06:29.763440 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77cc4f456b-9m7qc" podStartSLOduration=2.763419639 podStartE2EDuration="2.763419639s" podCreationTimestamp="2026-01-31 09:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:29.758561237 +0000 UTC m=+328.064437461" watchObservedRunningTime="2026-01-31 09:06:29.763419639 +0000 UTC m=+328.069295843" Jan 31 09:06:41 crc kubenswrapper[4732]: I0131 09:06:41.507824 4732 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tr5nx" Jan 31 09:06:41 crc kubenswrapper[4732]: I0131 09:06:41.564979 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99dtb"] Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.133016 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd"] Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.133822 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" podUID="87d41f5c-8722-4115-b4b2-06493d6f18e2" containerName="route-controller-manager" containerID="cri-o://20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8" gracePeriod=30 Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.770001 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.817789 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87d41f5c-8722-4115-b4b2-06493d6f18e2-serving-cert\") pod \"87d41f5c-8722-4115-b4b2-06493d6f18e2\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.817849 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-client-ca\") pod \"87d41f5c-8722-4115-b4b2-06493d6f18e2\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.817892 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbm5f\" (UniqueName: \"kubernetes.io/projected/87d41f5c-8722-4115-b4b2-06493d6f18e2-kube-api-access-hbm5f\") pod \"87d41f5c-8722-4115-b4b2-06493d6f18e2\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.817930 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-config\") pod \"87d41f5c-8722-4115-b4b2-06493d6f18e2\" (UID: \"87d41f5c-8722-4115-b4b2-06493d6f18e2\") " Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.818699 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-client-ca" (OuterVolumeSpecName: "client-ca") pod "87d41f5c-8722-4115-b4b2-06493d6f18e2" (UID: "87d41f5c-8722-4115-b4b2-06493d6f18e2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.818790 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-config" (OuterVolumeSpecName: "config") pod "87d41f5c-8722-4115-b4b2-06493d6f18e2" (UID: "87d41f5c-8722-4115-b4b2-06493d6f18e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.827577 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d41f5c-8722-4115-b4b2-06493d6f18e2-kube-api-access-hbm5f" (OuterVolumeSpecName: "kube-api-access-hbm5f") pod "87d41f5c-8722-4115-b4b2-06493d6f18e2" (UID: "87d41f5c-8722-4115-b4b2-06493d6f18e2"). InnerVolumeSpecName "kube-api-access-hbm5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.829751 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d41f5c-8722-4115-b4b2-06493d6f18e2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "87d41f5c-8722-4115-b4b2-06493d6f18e2" (UID: "87d41f5c-8722-4115-b4b2-06493d6f18e2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.843481 4732 generic.go:334] "Generic (PLEG): container finished" podID="87d41f5c-8722-4115-b4b2-06493d6f18e2" containerID="20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8" exitCode=0 Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.843555 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.843586 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" event={"ID":"87d41f5c-8722-4115-b4b2-06493d6f18e2","Type":"ContainerDied","Data":"20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8"} Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.843636 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd" event={"ID":"87d41f5c-8722-4115-b4b2-06493d6f18e2","Type":"ContainerDied","Data":"827277a6a635c6307caf7bd68b15efb68e2fe5da6e8b51a1e5a77e41d83b1851"} Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.843712 4732 scope.go:117] "RemoveContainer" containerID="20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.867135 4732 scope.go:117] "RemoveContainer" containerID="20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8" Jan 31 09:06:47 crc kubenswrapper[4732]: E0131 09:06:47.867560 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8\": container with ID starting with 20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8 not found: ID does not exist" containerID="20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.867621 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8"} err="failed to get container status \"20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8\": rpc error: code = NotFound desc = could not find container \"20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8\": container with ID starting with 20b470d8cff837870ae5ee797e7d03af879664119847524316640e3d39a1bbb8 not found: ID does not exist" Jan 
31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.881788 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd"] Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.883973 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-k4npd"] Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.919611 4732 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87d41f5c-8722-4115-b4b2-06493d6f18e2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.919654 4732 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.919714 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbm5f\" (UniqueName: \"kubernetes.io/projected/87d41f5c-8722-4115-b4b2-06493d6f18e2-kube-api-access-hbm5f\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:47 crc kubenswrapper[4732]: I0131 09:06:47.919729 4732 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d41f5c-8722-4115-b4b2-06493d6f18e2-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.556222 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d41f5c-8722-4115-b4b2-06493d6f18e2" path="/var/lib/kubelet/pods/87d41f5c-8722-4115-b4b2-06493d6f18e2/volumes" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.669002 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf"] Jan 31 09:06:48 crc kubenswrapper[4732]: E0131 09:06:48.669345 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d41f5c-8722-4115-b4b2-06493d6f18e2" containerName="route-controller-manager" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.669376 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d41f5c-8722-4115-b4b2-06493d6f18e2" containerName="route-controller-manager" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.669577 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d41f5c-8722-4115-b4b2-06493d6f18e2" containerName="route-controller-manager" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.670222 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.681320 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.681428 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.681944 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.682498 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.691246 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.699734 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.703131 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf"] Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.730091 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3219d76-a8c9-4166-b326-cf1cd4a31074-client-ca\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.730183 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ms8d\" (UniqueName: \"kubernetes.io/projected/a3219d76-a8c9-4166-b326-cf1cd4a31074-kube-api-access-4ms8d\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.730251 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3219d76-a8c9-4166-b326-cf1cd4a31074-serving-cert\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.730333 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3219d76-a8c9-4166-b326-cf1cd4a31074-config\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.831275 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3219d76-a8c9-4166-b326-cf1cd4a31074-client-ca\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: 
\"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.831328 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ms8d\" (UniqueName: \"kubernetes.io/projected/a3219d76-a8c9-4166-b326-cf1cd4a31074-kube-api-access-4ms8d\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.831373 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3219d76-a8c9-4166-b326-cf1cd4a31074-serving-cert\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.831422 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3219d76-a8c9-4166-b326-cf1cd4a31074-config\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.832587 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3219d76-a8c9-4166-b326-cf1cd4a31074-client-ca\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.832747 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3219d76-a8c9-4166-b326-cf1cd4a31074-config\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.838438 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3219d76-a8c9-4166-b326-cf1cd4a31074-serving-cert\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.851856 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ms8d\" (UniqueName: \"kubernetes.io/projected/a3219d76-a8c9-4166-b326-cf1cd4a31074-kube-api-access-4ms8d\") pod \"route-controller-manager-d99dc646-4q8hf\" (UID: \"a3219d76-a8c9-4166-b326-cf1cd4a31074\") " pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:48 crc kubenswrapper[4732]: I0131 09:06:48.989553 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:49 crc kubenswrapper[4732]: I0131 09:06:49.403801 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf"] Jan 31 09:06:49 crc kubenswrapper[4732]: W0131 09:06:49.413829 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3219d76_a8c9_4166_b326_cf1cd4a31074.slice/crio-bfc06e59c07ca0dcc545078158eb5f8758f312a45261a6fde0df305d70df7eb5 WatchSource:0}: Error finding container bfc06e59c07ca0dcc545078158eb5f8758f312a45261a6fde0df305d70df7eb5: Status 404 returned error can't find the container with id bfc06e59c07ca0dcc545078158eb5f8758f312a45261a6fde0df305d70df7eb5 Jan 31 09:06:49 crc kubenswrapper[4732]: I0131 09:06:49.857028 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" event={"ID":"a3219d76-a8c9-4166-b326-cf1cd4a31074","Type":"ContainerStarted","Data":"4a0298efb06452f5c63ab2b3c2798a83000c125a58f1161f38fb855be1f716c9"} Jan 31 09:06:49 crc kubenswrapper[4732]: I0131 09:06:49.857072 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" event={"ID":"a3219d76-a8c9-4166-b326-cf1cd4a31074","Type":"ContainerStarted","Data":"bfc06e59c07ca0dcc545078158eb5f8758f312a45261a6fde0df305d70df7eb5"} Jan 31 09:06:49 crc kubenswrapper[4732]: I0131 09:06:49.857325 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:06:49 crc kubenswrapper[4732]: I0131 09:06:49.872138 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" podStartSLOduration=2.8721209180000002 podStartE2EDuration="2.872120918s" podCreationTimestamp="2026-01-31 09:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:06:49.871455708 +0000 UTC m=+348.177331922" watchObservedRunningTime="2026-01-31 09:06:49.872120918 +0000 UTC m=+348.177997122" Jan 31 09:06:50 crc kubenswrapper[4732]: I0131 09:06:50.023390 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d99dc646-4q8hf" Jan 31 09:07:06 crc kubenswrapper[4732]: I0131 09:07:06.611173 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" podUID="4ac602fa-14af-4ae0-a538-d73e938db036" containerName="registry" containerID="cri-o://76229ced9d7ea551eb476a7db3e9648ede271e5ea762d59f5c012cdd16284033" gracePeriod=30 Jan 31 09:07:06 crc kubenswrapper[4732]: I0131 09:07:06.957276 4732 generic.go:334] "Generic (PLEG): container finished" podID="4ac602fa-14af-4ae0-a538-d73e938db036" containerID="76229ced9d7ea551eb476a7db3e9648ede271e5ea762d59f5c012cdd16284033" exitCode=0 Jan 31 09:07:06 crc kubenswrapper[4732]: I0131 09:07:06.957476 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" event={"ID":"4ac602fa-14af-4ae0-a538-d73e938db036","Type":"ContainerDied","Data":"76229ced9d7ea551eb476a7db3e9648ede271e5ea762d59f5c012cdd16284033"} Jan 
31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.026473 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.172569 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-bound-sa-token\") pod \"4ac602fa-14af-4ae0-a538-d73e938db036\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.172723 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ac602fa-14af-4ae0-a538-d73e938db036-installation-pull-secrets\") pod \"4ac602fa-14af-4ae0-a538-d73e938db036\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.172828 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb2p9\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-kube-api-access-bb2p9\") pod \"4ac602fa-14af-4ae0-a538-d73e938db036\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.172880 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ac602fa-14af-4ae0-a538-d73e938db036-ca-trust-extracted\") pod \"4ac602fa-14af-4ae0-a538-d73e938db036\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.173036 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4ac602fa-14af-4ae0-a538-d73e938db036\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.173070 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-trusted-ca\") pod \"4ac602fa-14af-4ae0-a538-d73e938db036\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.173096 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-registry-tls\") pod \"4ac602fa-14af-4ae0-a538-d73e938db036\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.173140 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-registry-certificates\") pod \"4ac602fa-14af-4ae0-a538-d73e938db036\" (UID: \"4ac602fa-14af-4ae0-a538-d73e938db036\") " Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.174048 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4ac602fa-14af-4ae0-a538-d73e938db036" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.174450 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4ac602fa-14af-4ae0-a538-d73e938db036" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.178024 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac602fa-14af-4ae0-a538-d73e938db036-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4ac602fa-14af-4ae0-a538-d73e938db036" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.178409 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4ac602fa-14af-4ae0-a538-d73e938db036" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.178886 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-kube-api-access-bb2p9" (OuterVolumeSpecName: "kube-api-access-bb2p9") pod "4ac602fa-14af-4ae0-a538-d73e938db036" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036"). InnerVolumeSpecName "kube-api-access-bb2p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.179181 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4ac602fa-14af-4ae0-a538-d73e938db036" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.195549 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4ac602fa-14af-4ae0-a538-d73e938db036" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.200311 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac602fa-14af-4ae0-a538-d73e938db036-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4ac602fa-14af-4ae0-a538-d73e938db036" (UID: "4ac602fa-14af-4ae0-a538-d73e938db036"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.274776 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb2p9\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-kube-api-access-bb2p9\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.274814 4732 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ac602fa-14af-4ae0-a538-d73e938db036-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.274823 4732 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.274831 4732 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.274839 4732 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ac602fa-14af-4ae0-a538-d73e938db036-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.274847 4732 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ac602fa-14af-4ae0-a538-d73e938db036-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.274854 4732 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ac602fa-14af-4ae0-a538-d73e938db036-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.965476 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.965461 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-99dtb" event={"ID":"4ac602fa-14af-4ae0-a538-d73e938db036","Type":"ContainerDied","Data":"039e3167eddd030475a90d04176c40d3799eaa481a42473d70722bc67a78215e"} Jan 31 09:07:07 crc kubenswrapper[4732]: I0131 09:07:07.966033 4732 scope.go:117] "RemoveContainer" containerID="76229ced9d7ea551eb476a7db3e9648ede271e5ea762d59f5c012cdd16284033" Jan 31 09:07:08 crc kubenswrapper[4732]: I0131 09:07:08.001227 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99dtb"] Jan 31 09:07:08 crc kubenswrapper[4732]: I0131 09:07:08.005479 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-99dtb"] Jan 31 09:07:08 crc kubenswrapper[4732]: I0131 09:07:08.550058 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac602fa-14af-4ae0-a538-d73e938db036" path="/var/lib/kubelet/pods/4ac602fa-14af-4ae0-a538-d73e938db036/volumes" Jan 31 09:07:17 crc kubenswrapper[4732]: I0131 09:07:17.498132 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:07:17 crc kubenswrapper[4732]: I0131 09:07:17.498534 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:07:47 crc kubenswrapper[4732]: I0131 09:07:47.498204 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:07:47 crc kubenswrapper[4732]: I0131 09:07:47.498651 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:08:17 crc kubenswrapper[4732]: I0131 09:08:17.497948 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:08:17 crc kubenswrapper[4732]: I0131 09:08:17.498518 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:08:17 crc kubenswrapper[4732]: I0131 09:08:17.498579 4732 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:08:17 crc kubenswrapper[4732]: I0131 09:08:17.499164 4732 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a1af67f6e9c90030eed50fdab77c62259e76a7813864bb504390768e9501756"} pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:08:17 crc kubenswrapper[4732]: I0131 09:08:17.499223 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" containerID="cri-o://1a1af67f6e9c90030eed50fdab77c62259e76a7813864bb504390768e9501756" gracePeriod=600 Jan 31 09:08:18 crc kubenswrapper[4732]: I0131 09:08:18.402637 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d790207-d357-4b47-87bf-5b505e061820" containerID="1a1af67f6e9c90030eed50fdab77c62259e76a7813864bb504390768e9501756" exitCode=0 Jan 31 09:08:18 crc kubenswrapper[4732]: I0131 09:08:18.402720 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerDied","Data":"1a1af67f6e9c90030eed50fdab77c62259e76a7813864bb504390768e9501756"} Jan 31 09:08:18 crc kubenswrapper[4732]: I0131 09:08:18.403003 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"942a11834ff55816d19ec94b72706370701e25dcee37029bb97b73b2e3078f9b"} Jan 31 09:08:18 crc kubenswrapper[4732]: I0131 09:08:18.403034 4732 scope.go:117] "RemoveContainer" containerID="ad1f906b34fddd34efd9d099479cd1fe7404da4ab11f3f571f9e3120167505a5" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.300425 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mtkt"] Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.301293 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovn-controller" containerID="cri-o://792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8" gracePeriod=30 Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.301657 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="sbdb" containerID="cri-o://4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715" gracePeriod=30 Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.301731 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="nbdb" containerID="cri-o://6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db" gracePeriod=30 Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.301767 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="northd" 
containerID="cri-o://c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20" gracePeriod=30 Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.301796 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b" gracePeriod=30 Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.301822 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kube-rbac-proxy-node" containerID="cri-o://8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a" gracePeriod=30 Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.301849 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovn-acl-logging" containerID="cri-o://075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81" gracePeriod=30 Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.342319 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" containerID="cri-o://b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4" gracePeriod=30 Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.626850 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/3.log" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.629105 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovn-acl-logging/0.log" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.629617 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovn-controller/0.log" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.630114 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681501 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bqvt7"] Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681715 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kube-rbac-proxy-node" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681733 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kube-rbac-proxy-node" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681748 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac602fa-14af-4ae0-a538-d73e938db036" containerName="registry" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681754 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac602fa-14af-4ae0-a538-d73e938db036" containerName="registry" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681764 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="nbdb" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681770 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="nbdb" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681780 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681787 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681794 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681800 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681807 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovn-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681813 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovn-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681820 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681826 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681832 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovn-acl-logging" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681838 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovn-acl-logging" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681845 4732 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kubecfg-setup" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681851 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kubecfg-setup" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681859 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681865 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681871 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="sbdb" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681877 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="sbdb" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681883 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="northd" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681889 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="northd" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.681898 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.681904 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682000 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="sbdb" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682012 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="nbdb" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682020 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovn-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682029 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682037 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovn-acl-logging" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682046 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682053 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682061 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682072 4732 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4ac602fa-14af-4ae0-a538-d73e938db036" containerName="registry" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682080 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682089 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="kube-rbac-proxy-node" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682099 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="northd" Jan 31 09:10:07 crc kubenswrapper[4732]: E0131 09:10:07.682197 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682207 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.682290 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerName="ovnkube-controller" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.683781 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770013 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-netd\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770107 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-slash\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770159 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770207 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-config\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770229 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-slash" (OuterVolumeSpecName: "host-slash") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770258 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-kubelet\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770321 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-systemd\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770373 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770382 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770446 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-env-overrides\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770487 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-openvswitch\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770547 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jktvz\" (UniqueName: \"kubernetes.io/projected/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-kube-api-access-jktvz\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770598 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-var-lib-openvswitch\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770537 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770588 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770638 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-netns\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770732 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770762 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-script-lib\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770789 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-ovn-kubernetes\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770806 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-node-log\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770807 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770870 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-bin\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770867 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). 
InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770893 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovn-node-metrics-cert\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770911 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-systemd-units\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770919 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-node-log" (OuterVolumeSpecName: "node-log") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770927 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-log-socket\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770950 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-log-socket" (OuterVolumeSpecName: "log-socket") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770976 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.770994 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-ovn\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771015 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771014 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771040 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-etc-openvswitch\") pod \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\" (UID: \"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8\") " Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771045 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771082 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771144 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771203 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771240 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-run-netns\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771277 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-systemd-units\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771302 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-slash\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771322 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-ovnkube-config\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771391 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxqq\" (UniqueName: \"kubernetes.io/projected/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-kube-api-access-mrxqq\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771421 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-kubelet\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771466 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-run-openvswitch\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771487 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-ovn-node-metrics-cert\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.771653 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-cni-netd\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772082 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-ovnkube-script-lib\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772202 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-etc-openvswitch\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772307 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-log-socket\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772360 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772410 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-env-overrides\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772469 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772499 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-run-ovn\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772557 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-var-lib-openvswitch\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 
09:10:07.772578 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-node-log\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772603 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-run-systemd\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772624 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-cni-bin\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772754 4732 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772771 4732 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772784 4732 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772795 4732 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772809 4732 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772820 4732 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772831 4732 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772844 4732 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772855 4732 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-openvswitch\") on node 
\"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772866 4732 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772878 4732 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772889 4732 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772901 4732 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772914 4732 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772951 4732 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772963 4732 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.772973 4732 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.776097 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-kube-api-access-jktvz" (OuterVolumeSpecName: "kube-api-access-jktvz") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "kube-api-access-jktvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.777328 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.785735 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" (UID: "82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875461 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrxqq\" (UniqueName: \"kubernetes.io/projected/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-kube-api-access-mrxqq\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875535 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-kubelet\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875589 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-run-openvswitch\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875609 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-ovn-node-metrics-cert\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875652 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-cni-netd\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875683 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-ovnkube-script-lib\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875704 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-etc-openvswitch\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875730 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-log-socket\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875728 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-kubelet\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc 
kubenswrapper[4732]: I0131 09:10:07.875787 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875748 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875812 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-cni-netd\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875830 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-env-overrides\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875847 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-run-openvswitch\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875862 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875887 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-run-ovn\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875915 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-var-lib-openvswitch\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875931 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-node-log\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc 
kubenswrapper[4732]: I0131 09:10:07.875925 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-etc-openvswitch\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875963 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-run-systemd\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875946 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-run-systemd\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.875988 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-log-socket\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876002 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-cni-bin\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876036 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-run-netns\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876065 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-systemd-units\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876093 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-slash\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876120 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-ovnkube-config\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876217 4732 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876230 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jktvz\" (UniqueName: \"kubernetes.io/projected/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-kube-api-access-jktvz\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876240 4732 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876455 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-run-ovn\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876494 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876520 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-run-netns\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876544 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-var-lib-openvswitch\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876589 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-systemd-units\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876623 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-slash\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876626 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-node-log\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876683 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-env-overrides\") pod 
\"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876709 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-host-cni-bin\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.876975 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-ovnkube-config\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.877106 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-ovnkube-script-lib\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.879136 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-ovn-node-metrics-cert\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:07 crc kubenswrapper[4732]: I0131 09:10:07.889994 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrxqq\" (UniqueName: \"kubernetes.io/projected/9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5-kube-api-access-mrxqq\") pod \"ovnkube-node-bqvt7\" (UID: \"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.006259 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.031228 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerStarted","Data":"665c5e4eed75a7701e236236d51ecb4c2a2ec042f26ac339fa76cfae9e3def62"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.033767 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovnkube-controller/3.log" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.037325 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovn-acl-logging/0.log" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.037987 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mtkt_82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/ovn-controller/0.log" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038346 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4" exitCode=0 Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038377 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715" exitCode=0 Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038387 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db" exitCode=0 Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038399 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20" exitCode=0 Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038408 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b" exitCode=0 Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038419 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a" exitCode=0 Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038430 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81" exitCode=143 Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038439 4732 generic.go:334] "Generic (PLEG): container finished" podID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" containerID="792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8" exitCode=143 Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038432 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038484 4732 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038514 4732 scope.go:117] "RemoveContainer" containerID="b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038488 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038597 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038615 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038627 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038640 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038653 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038689 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038697 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038704 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038711 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038720 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038727 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038734 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038741 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038752 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038765 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038774 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038782 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038790 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038797 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038804 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038811 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038818 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038824 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038832 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038841 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038853 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038863 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038870 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038879 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038886 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038894 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038901 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038908 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038915 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038922 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038932 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mtkt" event={"ID":"82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8","Type":"ContainerDied","Data":"8ed5be886bc7763adb1d7a0a054a6dd73cde6a707faa32148f1f5ddc889335e4"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038943 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038952 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 
09:10:08.038959 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038967 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038974 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038981 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038988 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.038994 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.039001 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.039009 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.043045 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/2.log" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.043610 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/1.log" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.043652 4732 generic.go:334] "Generic (PLEG): container finished" podID="8e23192f-14db-41ef-af89-4a76e325d9c1" containerID="98e5c23e9a8bde55626defc76f20f8510954c4ef79d762950e0790a7de4dce4f" exitCode=2 Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.043727 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4mxsr" event={"ID":"8e23192f-14db-41ef-af89-4a76e325d9c1","Type":"ContainerDied","Data":"98e5c23e9a8bde55626defc76f20f8510954c4ef79d762950e0790a7de4dce4f"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.043750 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617"} Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.044356 4732 scope.go:117] "RemoveContainer" containerID="98e5c23e9a8bde55626defc76f20f8510954c4ef79d762950e0790a7de4dce4f" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.044581 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: 
\"back-off 20s restarting failed container=kube-multus pod=multus-4mxsr_openshift-multus(8e23192f-14db-41ef-af89-4a76e325d9c1)\"" pod="openshift-multus/multus-4mxsr" podUID="8e23192f-14db-41ef-af89-4a76e325d9c1" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.090598 4732 scope.go:117] "RemoveContainer" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.100662 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mtkt"] Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.105452 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mtkt"] Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.114140 4732 scope.go:117] "RemoveContainer" containerID="4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.125626 4732 scope.go:117] "RemoveContainer" containerID="6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.136466 4732 scope.go:117] "RemoveContainer" containerID="c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.146687 4732 scope.go:117] "RemoveContainer" containerID="8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.159220 4732 scope.go:117] "RemoveContainer" containerID="8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.170253 4732 scope.go:117] "RemoveContainer" containerID="075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.181203 4732 scope.go:117] "RemoveContainer" containerID="792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.192370 4732 scope.go:117] "RemoveContainer" containerID="cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.203654 4732 scope.go:117] "RemoveContainer" containerID="b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.204080 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": container with ID starting with b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4 not found: ID does not exist" containerID="b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.204126 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} err="failed to get container status \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": rpc error: code = NotFound desc = could not find container \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": container with ID starting with b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.204157 4732 scope.go:117] "RemoveContainer" 
containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.204415 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\": container with ID starting with b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1 not found: ID does not exist" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.204465 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} err="failed to get container status \"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\": rpc error: code = NotFound desc = could not find container \"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\": container with ID starting with b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.204503 4732 scope.go:117] "RemoveContainer" containerID="4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.204950 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\": container with ID starting with 4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715 not found: ID does not exist" containerID="4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.204980 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} err="failed to get container status \"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\": rpc error: code = NotFound desc = could not find container \"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\": container with ID starting with 4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.205001 4732 scope.go:117] "RemoveContainer" containerID="6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.205239 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\": container with ID starting with 6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db not found: ID does not exist" containerID="6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.205281 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} err="failed to get container status \"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\": rpc error: code = NotFound desc = could not find container \"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\": container with ID starting with 
6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.205308 4732 scope.go:117] "RemoveContainer" containerID="c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.205589 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\": container with ID starting with c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20 not found: ID does not exist" containerID="c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.205626 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} err="failed to get container status \"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\": rpc error: code = NotFound desc = could not find container \"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\": container with ID starting with c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.205648 4732 scope.go:117] "RemoveContainer" containerID="8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.205918 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\": container with ID starting with 8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b not found: ID does not exist" containerID="8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.205941 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} err="failed to get container status \"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\": rpc error: code = NotFound desc = could not find container \"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\": container with ID starting with 8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.205954 4732 scope.go:117] "RemoveContainer" containerID="8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.206150 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\": container with ID starting with 8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a not found: ID does not exist" containerID="8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.206170 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} err="failed to get container status \"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\": rpc 
error: code = NotFound desc = could not find container \"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\": container with ID starting with 8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.206189 4732 scope.go:117] "RemoveContainer" containerID="075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.206459 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\": container with ID starting with 075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81 not found: ID does not exist" containerID="075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.206486 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} err="failed to get container status \"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\": rpc error: code = NotFound desc = could not find container \"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\": container with ID starting with 075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.206502 4732 scope.go:117] "RemoveContainer" containerID="792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.206729 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\": container with ID starting with 792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8 not found: ID does not exist" containerID="792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.206762 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} err="failed to get container status \"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\": rpc error: code = NotFound desc = could not find container \"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\": container with ID starting with 792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.206782 4732 scope.go:117] "RemoveContainer" containerID="cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193" Jan 31 09:10:08 crc kubenswrapper[4732]: E0131 09:10:08.207070 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\": container with ID starting with cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193 not found: ID does not exist" containerID="cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.207095 4732 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193"} err="failed to get container status \"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\": rpc error: code = NotFound desc = could not find container \"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\": container with ID starting with cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.207111 4732 scope.go:117] "RemoveContainer" containerID="b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.207348 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} err="failed to get container status \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": rpc error: code = NotFound desc = could not find container \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": container with ID starting with b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.207366 4732 scope.go:117] "RemoveContainer" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.207621 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} err="failed to get container status \"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\": rpc error: code = NotFound desc = could not find container \"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\": container with ID starting with b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.207644 4732 scope.go:117] "RemoveContainer" containerID="4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.207858 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} err="failed to get container status \"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\": rpc error: code = NotFound desc = could not find container \"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\": container with ID starting with 4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.207882 4732 scope.go:117] "RemoveContainer" containerID="6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.208118 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} err="failed to get container status \"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\": rpc error: code = NotFound desc = could not find container \"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\": container with ID starting with 6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db not found: ID does not exist" Jan 
31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.208167 4732 scope.go:117] "RemoveContainer" containerID="c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.208402 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} err="failed to get container status \"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\": rpc error: code = NotFound desc = could not find container \"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\": container with ID starting with c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.208424 4732 scope.go:117] "RemoveContainer" containerID="8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.208624 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} err="failed to get container status \"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\": rpc error: code = NotFound desc = could not find container \"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\": container with ID starting with 8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.208646 4732 scope.go:117] "RemoveContainer" containerID="8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.208859 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} err="failed to get container status \"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\": rpc error: code = NotFound desc = could not find container \"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\": container with ID starting with 8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.208881 4732 scope.go:117] "RemoveContainer" containerID="075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.209105 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} err="failed to get container status \"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\": rpc error: code = NotFound desc = could not find container \"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\": container with ID starting with 075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.209126 4732 scope.go:117] "RemoveContainer" containerID="792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.209405 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} err="failed to get container status 
\"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\": rpc error: code = NotFound desc = could not find container \"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\": container with ID starting with 792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.209439 4732 scope.go:117] "RemoveContainer" containerID="cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.209686 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193"} err="failed to get container status \"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\": rpc error: code = NotFound desc = could not find container \"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\": container with ID starting with cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.209713 4732 scope.go:117] "RemoveContainer" containerID="b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.210012 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} err="failed to get container status \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": rpc error: code = NotFound desc = could not find container \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": container with ID starting with b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.210043 4732 scope.go:117] "RemoveContainer" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.210297 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} err="failed to get container status \"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\": rpc error: code = NotFound desc = could not find container \"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\": container with ID starting with b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.210321 4732 scope.go:117] "RemoveContainer" containerID="4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.210537 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} err="failed to get container status \"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\": rpc error: code = NotFound desc = could not find container \"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\": container with ID starting with 4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.210564 4732 scope.go:117] "RemoveContainer" 
containerID="6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.210827 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} err="failed to get container status \"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\": rpc error: code = NotFound desc = could not find container \"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\": container with ID starting with 6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.210851 4732 scope.go:117] "RemoveContainer" containerID="c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.211109 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} err="failed to get container status \"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\": rpc error: code = NotFound desc = could not find container \"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\": container with ID starting with c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.211138 4732 scope.go:117] "RemoveContainer" containerID="8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.211501 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} err="failed to get container status \"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\": rpc error: code = NotFound desc = could not find container \"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\": container with ID starting with 8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.211526 4732 scope.go:117] "RemoveContainer" containerID="8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.211845 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} err="failed to get container status \"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\": rpc error: code = NotFound desc = could not find container \"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\": container with ID starting with 8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.211874 4732 scope.go:117] "RemoveContainer" containerID="075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.212109 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} err="failed to get container status \"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\": rpc error: code = NotFound desc = could not find 
container \"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\": container with ID starting with 075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.212146 4732 scope.go:117] "RemoveContainer" containerID="792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.212454 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} err="failed to get container status \"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\": rpc error: code = NotFound desc = could not find container \"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\": container with ID starting with 792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.212474 4732 scope.go:117] "RemoveContainer" containerID="cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.212788 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193"} err="failed to get container status \"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\": rpc error: code = NotFound desc = could not find container \"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\": container with ID starting with cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.212827 4732 scope.go:117] "RemoveContainer" containerID="b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.213179 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} err="failed to get container status \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": rpc error: code = NotFound desc = could not find container \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": container with ID starting with b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.213220 4732 scope.go:117] "RemoveContainer" containerID="b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.213513 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1"} err="failed to get container status \"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\": rpc error: code = NotFound desc = could not find container \"b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1\": container with ID starting with b77508f48f518644f99bdda577737a5ed3b909f97a70c1c297c035609ba46fa1 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.213551 4732 scope.go:117] "RemoveContainer" containerID="4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.213970 4732 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715"} err="failed to get container status \"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\": rpc error: code = NotFound desc = could not find container \"4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715\": container with ID starting with 4cd519944f0ae0ee9920a6a54de3dd40c73a65d0c907f625c57d7648a6d0a715 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.214042 4732 scope.go:117] "RemoveContainer" containerID="6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.214331 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db"} err="failed to get container status \"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\": rpc error: code = NotFound desc = could not find container \"6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db\": container with ID starting with 6b26417bd7817c984a59723e2231e3a43c554022aef67eacf300adee0166c3db not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.214364 4732 scope.go:117] "RemoveContainer" containerID="c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.214685 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20"} err="failed to get container status \"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\": rpc error: code = NotFound desc = could not find container \"c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20\": container with ID starting with c79f404e5bc3d6dc83681d68605b1f26565813ccec34c6d36ee9f21686ca4f20 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.214711 4732 scope.go:117] "RemoveContainer" containerID="8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.214955 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b"} err="failed to get container status \"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\": rpc error: code = NotFound desc = could not find container \"8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b\": container with ID starting with 8fde05e46badc11b2174b88e2590402436bb669bcbd017ceb4d11d25066b421b not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.214997 4732 scope.go:117] "RemoveContainer" containerID="8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.215214 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a"} err="failed to get container status \"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\": rpc error: code = NotFound desc = could not find container \"8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a\": container with ID starting with 
8521ee1e84f540faa7b093ecd0bb7761ae38487a6c7abf52028fe820ea9a4c1a not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.215268 4732 scope.go:117] "RemoveContainer" containerID="075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.215548 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81"} err="failed to get container status \"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\": rpc error: code = NotFound desc = could not find container \"075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81\": container with ID starting with 075418297d972e23003b3ce2016f7709d138316c9d08837dc484dadf4d7f4a81 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.215574 4732 scope.go:117] "RemoveContainer" containerID="792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.215952 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8"} err="failed to get container status \"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\": rpc error: code = NotFound desc = could not find container \"792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8\": container with ID starting with 792e702a9fa66ea68366e8f88d976f7e280223815c4b70258566c32faaed57a8 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.215977 4732 scope.go:117] "RemoveContainer" containerID="cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.216318 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193"} err="failed to get container status \"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\": rpc error: code = NotFound desc = could not find container \"cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193\": container with ID starting with cfb08df78497d5ae0eb39af9a06378ea80914d9b4d7014dd85d930f54c733193 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.216358 4732 scope.go:117] "RemoveContainer" containerID="b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.216714 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4"} err="failed to get container status \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": rpc error: code = NotFound desc = could not find container \"b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4\": container with ID starting with b950d29df580418b6e7082c9a672624c14f714944ccbf4cd1d3824810d885ac4 not found: ID does not exist" Jan 31 09:10:08 crc kubenswrapper[4732]: I0131 09:10:08.554711 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8" path="/var/lib/kubelet/pods/82d07e8c-9b7d-43fb-ac36-f6c87d08e2c8/volumes" Jan 31 09:10:09 crc kubenswrapper[4732]: I0131 09:10:09.050937 4732 generic.go:334] "Generic (PLEG): container finished" 
podID="9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5" containerID="a5ff595b08c6520ee5972e06c7feb56e18375d97f3fb9ac92996609011138b32" exitCode=0 Jan 31 09:10:09 crc kubenswrapper[4732]: I0131 09:10:09.051003 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerDied","Data":"a5ff595b08c6520ee5972e06c7feb56e18375d97f3fb9ac92996609011138b32"} Jan 31 09:10:10 crc kubenswrapper[4732]: I0131 09:10:10.059666 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerStarted","Data":"60d12f3e0343937b71ab044b3a53e9f8522e1a6557ce42cdc08adf4be81b8603"} Jan 31 09:10:10 crc kubenswrapper[4732]: I0131 09:10:10.060265 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerStarted","Data":"48385875afd5dbd56191f18499cecf42addff2323fdceb76f9190a506b6833bd"} Jan 31 09:10:10 crc kubenswrapper[4732]: I0131 09:10:10.060280 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerStarted","Data":"0c3c896b657c02178bdb1189e62a629663ed2408a8325116c175852928548768"} Jan 31 09:10:10 crc kubenswrapper[4732]: I0131 09:10:10.060293 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerStarted","Data":"fc268c48d5b8f4a59fdf70f765783e66e5230a80929f88f42499217455dff299"} Jan 31 09:10:11 crc kubenswrapper[4732]: I0131 09:10:11.067009 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerStarted","Data":"159aeda61e93a43c7aa035ca8ebd5fa16fb985dadbd04937f0281edd763f08fa"} Jan 31 09:10:11 crc kubenswrapper[4732]: I0131 09:10:11.067055 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerStarted","Data":"59af69892b2784be32c790368744856b0a20ba343e6c2f5de661a8f6eb56760b"} Jan 31 09:10:13 crc kubenswrapper[4732]: I0131 09:10:13.077834 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerStarted","Data":"049127f38ff4519d495e4383644da5148f88578ca5c97ca910a987c39b795f4a"} Jan 31 09:10:15 crc kubenswrapper[4732]: I0131 09:10:15.096340 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" event={"ID":"9faf3ee0-47c7-4c3e-85cb-d8800f2e26d5","Type":"ContainerStarted","Data":"507caade8ce1849cd8a7ba9cc1874992055408592aa10cb52481ed404e1acd1a"} Jan 31 09:10:15 crc kubenswrapper[4732]: I0131 09:10:15.096953 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:15 crc kubenswrapper[4732]: I0131 09:10:15.097119 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:15 crc kubenswrapper[4732]: I0131 09:10:15.097230 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:15 crc kubenswrapper[4732]: I0131 09:10:15.122173 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:15 crc kubenswrapper[4732]: I0131 09:10:15.122900 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:15 crc kubenswrapper[4732]: I0131 09:10:15.129799 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" podStartSLOduration=8.129781548 podStartE2EDuration="8.129781548s" podCreationTimestamp="2026-01-31 09:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:10:15.124515409 +0000 UTC m=+553.430391623" watchObservedRunningTime="2026-01-31 09:10:15.129781548 +0000 UTC m=+553.435657752" Jan 31 09:10:17 crc kubenswrapper[4732]: I0131 09:10:17.498575 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:10:17 crc kubenswrapper[4732]: I0131 09:10:17.498898 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:10:20 crc kubenswrapper[4732]: I0131 09:10:20.542703 4732 scope.go:117] "RemoveContainer" containerID="98e5c23e9a8bde55626defc76f20f8510954c4ef79d762950e0790a7de4dce4f" Jan 31 09:10:20 crc kubenswrapper[4732]: E0131 09:10:20.543456 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4mxsr_openshift-multus(8e23192f-14db-41ef-af89-4a76e325d9c1)\"" pod="openshift-multus/multus-4mxsr" podUID="8e23192f-14db-41ef-af89-4a76e325d9c1" Jan 31 09:10:31 crc kubenswrapper[4732]: I0131 09:10:31.544327 4732 scope.go:117] "RemoveContainer" containerID="98e5c23e9a8bde55626defc76f20f8510954c4ef79d762950e0790a7de4dce4f" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.191282 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/2.log" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.192090 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/1.log" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.192210 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4mxsr" event={"ID":"8e23192f-14db-41ef-af89-4a76e325d9c1","Type":"ContainerStarted","Data":"9a91cfdec82d25573bcc6a3131e5bad59d02bdc0a8b1943a4c0deb55c924fbce"} Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.564293 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v"] Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.565589 4732 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.568298 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.576984 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v"] Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.727316 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.727367 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.727427 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j52vg\" (UniqueName: \"kubernetes.io/projected/76f99e73-f72c-4026-b43f-dcb9f20b554f-kube-api-access-j52vg\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.828567 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.828606 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.828655 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j52vg\" (UniqueName: \"kubernetes.io/projected/76f99e73-f72c-4026-b43f-dcb9f20b554f-kube-api-access-j52vg\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.829342 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.829579 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.868873 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j52vg\" (UniqueName: \"kubernetes.io/projected/76f99e73-f72c-4026-b43f-dcb9f20b554f-kube-api-access-j52vg\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: I0131 09:10:32.931749 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: E0131 09:10:32.961555 4732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_openshift-marketplace_76f99e73-f72c-4026-b43f-dcb9f20b554f_0(24882b4407469c82f2a582f0e24282bc4819fd3a1612e09c4fe2a3dc3ca73be1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 09:10:32 crc kubenswrapper[4732]: E0131 09:10:32.961644 4732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_openshift-marketplace_76f99e73-f72c-4026-b43f-dcb9f20b554f_0(24882b4407469c82f2a582f0e24282bc4819fd3a1612e09c4fe2a3dc3ca73be1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: E0131 09:10:32.961694 4732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_openshift-marketplace_76f99e73-f72c-4026-b43f-dcb9f20b554f_0(24882b4407469c82f2a582f0e24282bc4819fd3a1612e09c4fe2a3dc3ca73be1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:32 crc kubenswrapper[4732]: E0131 09:10:32.961764 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_openshift-marketplace(76f99e73-f72c-4026-b43f-dcb9f20b554f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_openshift-marketplace(76f99e73-f72c-4026-b43f-dcb9f20b554f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_openshift-marketplace_76f99e73-f72c-4026-b43f-dcb9f20b554f_0(24882b4407469c82f2a582f0e24282bc4819fd3a1612e09c4fe2a3dc3ca73be1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" podUID="76f99e73-f72c-4026-b43f-dcb9f20b554f" Jan 31 09:10:33 crc kubenswrapper[4732]: I0131 09:10:33.197647 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:33 crc kubenswrapper[4732]: I0131 09:10:33.198607 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:33 crc kubenswrapper[4732]: I0131 09:10:33.615776 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v"] Jan 31 09:10:33 crc kubenswrapper[4732]: W0131 09:10:33.620055 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76f99e73_f72c_4026_b43f_dcb9f20b554f.slice/crio-2775342a40bcc83af9f040ed925ff063f4750e23fdebac2a47fe9858393cb91d WatchSource:0}: Error finding container 2775342a40bcc83af9f040ed925ff063f4750e23fdebac2a47fe9858393cb91d: Status 404 returned error can't find the container with id 2775342a40bcc83af9f040ed925ff063f4750e23fdebac2a47fe9858393cb91d Jan 31 09:10:34 crc kubenswrapper[4732]: I0131 09:10:34.207742 4732 generic.go:334] "Generic (PLEG): container finished" podID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerID="de1621a69fc5d3d3456ecdb4b2b3f0c8f4de38f8b1d1931d24ca267370544f6b" exitCode=0 Jan 31 09:10:34 crc kubenswrapper[4732]: I0131 09:10:34.208079 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" event={"ID":"76f99e73-f72c-4026-b43f-dcb9f20b554f","Type":"ContainerDied","Data":"de1621a69fc5d3d3456ecdb4b2b3f0c8f4de38f8b1d1931d24ca267370544f6b"} Jan 31 09:10:34 crc kubenswrapper[4732]: I0131 09:10:34.208125 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" event={"ID":"76f99e73-f72c-4026-b43f-dcb9f20b554f","Type":"ContainerStarted","Data":"2775342a40bcc83af9f040ed925ff063f4750e23fdebac2a47fe9858393cb91d"} Jan 31 09:10:34 crc kubenswrapper[4732]: I0131 09:10:34.210622 4732 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:10:36 crc kubenswrapper[4732]: I0131 09:10:36.222711 4732 generic.go:334] "Generic (PLEG): container finished" 
podID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerID="a4915083fdb38c4a25660c8d0524c37bb3cd9ac78c2e6fcedac98a56c8c39103" exitCode=0 Jan 31 09:10:36 crc kubenswrapper[4732]: I0131 09:10:36.222825 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" event={"ID":"76f99e73-f72c-4026-b43f-dcb9f20b554f","Type":"ContainerDied","Data":"a4915083fdb38c4a25660c8d0524c37bb3cd9ac78c2e6fcedac98a56c8c39103"} Jan 31 09:10:37 crc kubenswrapper[4732]: I0131 09:10:37.230183 4732 generic.go:334] "Generic (PLEG): container finished" podID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerID="da05d5940c342f32c8a6aacbf8ae5641047149294a9be2777a763f6be35278b7" exitCode=0 Jan 31 09:10:37 crc kubenswrapper[4732]: I0131 09:10:37.230293 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" event={"ID":"76f99e73-f72c-4026-b43f-dcb9f20b554f","Type":"ContainerDied","Data":"da05d5940c342f32c8a6aacbf8ae5641047149294a9be2777a763f6be35278b7"} Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.029665 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bqvt7" Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.475763 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.596720 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-util\") pod \"76f99e73-f72c-4026-b43f-dcb9f20b554f\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.596816 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j52vg\" (UniqueName: \"kubernetes.io/projected/76f99e73-f72c-4026-b43f-dcb9f20b554f-kube-api-access-j52vg\") pod \"76f99e73-f72c-4026-b43f-dcb9f20b554f\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.597122 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-bundle\") pod \"76f99e73-f72c-4026-b43f-dcb9f20b554f\" (UID: \"76f99e73-f72c-4026-b43f-dcb9f20b554f\") " Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.599265 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-bundle" (OuterVolumeSpecName: "bundle") pod "76f99e73-f72c-4026-b43f-dcb9f20b554f" (UID: "76f99e73-f72c-4026-b43f-dcb9f20b554f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.601868 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76f99e73-f72c-4026-b43f-dcb9f20b554f-kube-api-access-j52vg" (OuterVolumeSpecName: "kube-api-access-j52vg") pod "76f99e73-f72c-4026-b43f-dcb9f20b554f" (UID: "76f99e73-f72c-4026-b43f-dcb9f20b554f"). InnerVolumeSpecName "kube-api-access-j52vg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.611056 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-util" (OuterVolumeSpecName: "util") pod "76f99e73-f72c-4026-b43f-dcb9f20b554f" (UID: "76f99e73-f72c-4026-b43f-dcb9f20b554f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.698879 4732 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.698921 4732 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76f99e73-f72c-4026-b43f-dcb9f20b554f-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:38 crc kubenswrapper[4732]: I0131 09:10:38.698930 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j52vg\" (UniqueName: \"kubernetes.io/projected/76f99e73-f72c-4026-b43f-dcb9f20b554f-kube-api-access-j52vg\") on node \"crc\" DevicePath \"\"" Jan 31 09:10:39 crc kubenswrapper[4732]: I0131 09:10:39.243234 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" event={"ID":"76f99e73-f72c-4026-b43f-dcb9f20b554f","Type":"ContainerDied","Data":"2775342a40bcc83af9f040ed925ff063f4750e23fdebac2a47fe9858393cb91d"} Jan 31 09:10:39 crc kubenswrapper[4732]: I0131 09:10:39.243279 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2775342a40bcc83af9f040ed925ff063f4750e23fdebac2a47fe9858393cb91d" Jan 31 09:10:39 crc kubenswrapper[4732]: I0131 09:10:39.243287 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v" Jan 31 09:10:47 crc kubenswrapper[4732]: I0131 09:10:47.497642 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:10:47 crc kubenswrapper[4732]: I0131 09:10:47.498230 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.638432 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn"] Jan 31 09:10:50 crc kubenswrapper[4732]: E0131 09:10:50.638914 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerName="util" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.638934 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerName="util" Jan 31 09:10:50 crc kubenswrapper[4732]: E0131 09:10:50.638954 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerName="pull" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.638960 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerName="pull" Jan 31 09:10:50 crc kubenswrapper[4732]: E0131 09:10:50.638968 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerName="extract" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.638976 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerName="extract" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.639073 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="76f99e73-f72c-4026-b43f-dcb9f20b554f" containerName="extract" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.639502 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.641785 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.642755 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.642886 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.643001 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lg8wh" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.643159 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.644452 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ca218dd-0d42-45c8-b4e4-ca638781c915-webhook-cert\") pod \"metallb-operator-controller-manager-6d8dc66c8b-8p2mn\" (UID: \"8ca218dd-0d42-45c8-b4e4-ca638781c915\") " pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.644505 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ca218dd-0d42-45c8-b4e4-ca638781c915-apiservice-cert\") pod \"metallb-operator-controller-manager-6d8dc66c8b-8p2mn\" (UID: \"8ca218dd-0d42-45c8-b4e4-ca638781c915\") " pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.644690 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knrcd\" (UniqueName: \"kubernetes.io/projected/8ca218dd-0d42-45c8-b4e4-ca638781c915-kube-api-access-knrcd\") pod \"metallb-operator-controller-manager-6d8dc66c8b-8p2mn\" (UID: \"8ca218dd-0d42-45c8-b4e4-ca638781c915\") " pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.680169 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn"] Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.745867 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knrcd\" (UniqueName: \"kubernetes.io/projected/8ca218dd-0d42-45c8-b4e4-ca638781c915-kube-api-access-knrcd\") pod \"metallb-operator-controller-manager-6d8dc66c8b-8p2mn\" (UID: \"8ca218dd-0d42-45c8-b4e4-ca638781c915\") " pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.745935 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ca218dd-0d42-45c8-b4e4-ca638781c915-webhook-cert\") pod \"metallb-operator-controller-manager-6d8dc66c8b-8p2mn\" (UID: \"8ca218dd-0d42-45c8-b4e4-ca638781c915\") " pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.745985 4732 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ca218dd-0d42-45c8-b4e4-ca638781c915-apiservice-cert\") pod \"metallb-operator-controller-manager-6d8dc66c8b-8p2mn\" (UID: \"8ca218dd-0d42-45c8-b4e4-ca638781c915\") " pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.752476 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ca218dd-0d42-45c8-b4e4-ca638781c915-webhook-cert\") pod \"metallb-operator-controller-manager-6d8dc66c8b-8p2mn\" (UID: \"8ca218dd-0d42-45c8-b4e4-ca638781c915\") " pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.752510 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ca218dd-0d42-45c8-b4e4-ca638781c915-apiservice-cert\") pod \"metallb-operator-controller-manager-6d8dc66c8b-8p2mn\" (UID: \"8ca218dd-0d42-45c8-b4e4-ca638781c915\") " pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.763433 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knrcd\" (UniqueName: \"kubernetes.io/projected/8ca218dd-0d42-45c8-b4e4-ca638781c915-kube-api-access-knrcd\") pod \"metallb-operator-controller-manager-6d8dc66c8b-8p2mn\" (UID: \"8ca218dd-0d42-45c8-b4e4-ca638781c915\") " pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:50 crc kubenswrapper[4732]: I0131 09:10:50.957702 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.034679 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf"] Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.035512 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.038699 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.039194 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.039878 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xkf4b" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.049183 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8qj\" (UniqueName: \"kubernetes.io/projected/62f950f6-2a18-4ca6-8cdb-75f47437053a-kube-api-access-vq8qj\") pod \"metallb-operator-webhook-server-c98699f55-6bzdf\" (UID: \"62f950f6-2a18-4ca6-8cdb-75f47437053a\") " pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.049284 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62f950f6-2a18-4ca6-8cdb-75f47437053a-apiservice-cert\") pod \"metallb-operator-webhook-server-c98699f55-6bzdf\" (UID: \"62f950f6-2a18-4ca6-8cdb-75f47437053a\") " pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.049319 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62f950f6-2a18-4ca6-8cdb-75f47437053a-webhook-cert\") pod \"metallb-operator-webhook-server-c98699f55-6bzdf\" (UID: \"62f950f6-2a18-4ca6-8cdb-75f47437053a\") " pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.066484 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf"] Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.150137 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62f950f6-2a18-4ca6-8cdb-75f47437053a-webhook-cert\") pod \"metallb-operator-webhook-server-c98699f55-6bzdf\" (UID: \"62f950f6-2a18-4ca6-8cdb-75f47437053a\") " pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.150242 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq8qj\" (UniqueName: \"kubernetes.io/projected/62f950f6-2a18-4ca6-8cdb-75f47437053a-kube-api-access-vq8qj\") pod \"metallb-operator-webhook-server-c98699f55-6bzdf\" (UID: \"62f950f6-2a18-4ca6-8cdb-75f47437053a\") " pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.150299 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62f950f6-2a18-4ca6-8cdb-75f47437053a-apiservice-cert\") pod \"metallb-operator-webhook-server-c98699f55-6bzdf\" (UID: \"62f950f6-2a18-4ca6-8cdb-75f47437053a\") " pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.160644 4732 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62f950f6-2a18-4ca6-8cdb-75f47437053a-webhook-cert\") pod \"metallb-operator-webhook-server-c98699f55-6bzdf\" (UID: \"62f950f6-2a18-4ca6-8cdb-75f47437053a\") " pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.160653 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62f950f6-2a18-4ca6-8cdb-75f47437053a-apiservice-cert\") pod \"metallb-operator-webhook-server-c98699f55-6bzdf\" (UID: \"62f950f6-2a18-4ca6-8cdb-75f47437053a\") " pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.176747 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq8qj\" (UniqueName: \"kubernetes.io/projected/62f950f6-2a18-4ca6-8cdb-75f47437053a-kube-api-access-vq8qj\") pod \"metallb-operator-webhook-server-c98699f55-6bzdf\" (UID: \"62f950f6-2a18-4ca6-8cdb-75f47437053a\") " pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.294367 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn"] Jan 31 09:10:51 crc kubenswrapper[4732]: W0131 09:10:51.296864 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ca218dd_0d42_45c8_b4e4_ca638781c915.slice/crio-600452c3ec84f04be30346cf95724591d530cd41a06688178f841081efd3d084 WatchSource:0}: Error finding container 600452c3ec84f04be30346cf95724591d530cd41a06688178f841081efd3d084: Status 404 returned error can't find the container with id 600452c3ec84f04be30346cf95724591d530cd41a06688178f841081efd3d084 Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.311213 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" event={"ID":"8ca218dd-0d42-45c8-b4e4-ca638781c915","Type":"ContainerStarted","Data":"600452c3ec84f04be30346cf95724591d530cd41a06688178f841081efd3d084"} Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.355031 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:51 crc kubenswrapper[4732]: I0131 09:10:51.593499 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf"] Jan 31 09:10:51 crc kubenswrapper[4732]: W0131 09:10:51.601323 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f950f6_2a18_4ca6_8cdb_75f47437053a.slice/crio-fc52324de537b12f15e20ce968015c137d45ab24e843e12eeba34172ad28a9ec WatchSource:0}: Error finding container fc52324de537b12f15e20ce968015c137d45ab24e843e12eeba34172ad28a9ec: Status 404 returned error can't find the container with id fc52324de537b12f15e20ce968015c137d45ab24e843e12eeba34172ad28a9ec Jan 31 09:10:52 crc kubenswrapper[4732]: I0131 09:10:52.317121 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" event={"ID":"62f950f6-2a18-4ca6-8cdb-75f47437053a","Type":"ContainerStarted","Data":"fc52324de537b12f15e20ce968015c137d45ab24e843e12eeba34172ad28a9ec"} Jan 31 09:10:56 crc kubenswrapper[4732]: I0131 09:10:56.341781 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" event={"ID":"62f950f6-2a18-4ca6-8cdb-75f47437053a","Type":"ContainerStarted","Data":"309e650ae87aac0d5de670a1b9e803ac09384be7cbc1b77ae2aa2c5a6f6c2a7a"} Jan 31 09:10:56 crc kubenswrapper[4732]: I0131 09:10:56.342212 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:10:56 crc kubenswrapper[4732]: I0131 09:10:56.343144 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" event={"ID":"8ca218dd-0d42-45c8-b4e4-ca638781c915","Type":"ContainerStarted","Data":"bce5120da08ada6151297c322ec5c219501f9d3a4bdafdfe8e7be38cf56b0e5a"} Jan 31 09:10:56 crc kubenswrapper[4732]: I0131 09:10:56.343500 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:10:56 crc kubenswrapper[4732]: I0131 09:10:56.373336 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" podStartSLOduration=1.551037713 podStartE2EDuration="5.37331397s" podCreationTimestamp="2026-01-31 09:10:51 +0000 UTC" firstStartedPulling="2026-01-31 09:10:51.605829627 +0000 UTC m=+589.911705831" lastFinishedPulling="2026-01-31 09:10:55.428105884 +0000 UTC m=+593.733982088" observedRunningTime="2026-01-31 09:10:56.368053831 +0000 UTC m=+594.673930065" watchObservedRunningTime="2026-01-31 09:10:56.37331397 +0000 UTC m=+594.679190184" Jan 31 09:10:56 crc kubenswrapper[4732]: I0131 09:10:56.389538 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" podStartSLOduration=2.355171877 podStartE2EDuration="6.389513378s" podCreationTimestamp="2026-01-31 09:10:50 +0000 UTC" firstStartedPulling="2026-01-31 09:10:51.299471417 +0000 UTC m=+589.605347621" lastFinishedPulling="2026-01-31 09:10:55.333812918 +0000 UTC m=+593.639689122" observedRunningTime="2026-01-31 09:10:56.386686257 +0000 UTC m=+594.692562471" watchObservedRunningTime="2026-01-31 09:10:56.389513378 +0000 UTC m=+594.695389582" Jan 31 
09:11:02 crc kubenswrapper[4732]: I0131 09:11:02.905831 4732 scope.go:117] "RemoveContainer" containerID="456969ec8d447fd7f9acd1803b222ebfb16b02c8dee1959dd936eec29aa1d617" Jan 31 09:11:03 crc kubenswrapper[4732]: I0131 09:11:03.377238 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4mxsr_8e23192f-14db-41ef-af89-4a76e325d9c1/kube-multus/2.log" Jan 31 09:11:11 crc kubenswrapper[4732]: I0131 09:11:11.360526 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-c98699f55-6bzdf" Jan 31 09:11:17 crc kubenswrapper[4732]: I0131 09:11:17.498132 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:11:17 crc kubenswrapper[4732]: I0131 09:11:17.498660 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:11:17 crc kubenswrapper[4732]: I0131 09:11:17.498725 4732 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:11:17 crc kubenswrapper[4732]: I0131 09:11:17.499288 4732 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"942a11834ff55816d19ec94b72706370701e25dcee37029bb97b73b2e3078f9b"} pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:11:17 crc kubenswrapper[4732]: I0131 09:11:17.499350 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" containerID="cri-o://942a11834ff55816d19ec94b72706370701e25dcee37029bb97b73b2e3078f9b" gracePeriod=600 Jan 31 09:11:18 crc kubenswrapper[4732]: I0131 09:11:18.457208 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d790207-d357-4b47-87bf-5b505e061820" containerID="942a11834ff55816d19ec94b72706370701e25dcee37029bb97b73b2e3078f9b" exitCode=0 Jan 31 09:11:18 crc kubenswrapper[4732]: I0131 09:11:18.457585 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerDied","Data":"942a11834ff55816d19ec94b72706370701e25dcee37029bb97b73b2e3078f9b"} Jan 31 09:11:18 crc kubenswrapper[4732]: I0131 09:11:18.457626 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"e8d3fd1eb561cfd678a2a0df1de54d984c12dc8e05f74e816b693d4b18b74a20"} Jan 31 09:11:18 crc kubenswrapper[4732]: I0131 09:11:18.457652 4732 scope.go:117] "RemoveContainer" containerID="1a1af67f6e9c90030eed50fdab77c62259e76a7813864bb504390768e9501756" Jan 31 09:11:30 crc kubenswrapper[4732]: I0131 09:11:30.960317 4732 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6d8dc66c8b-8p2mn" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.634531 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-6xvqw"] Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.637260 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.640061 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vsvzk" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.640263 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.640385 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.644260 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt"] Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.645085 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.647929 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt"] Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.656864 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.716624 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gcmq2"] Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.717482 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.719974 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.720337 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.720906 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.722162 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qlvvw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.748010 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-jq8g8"] Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.749369 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.752913 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.762192 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-jq8g8"] Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.764631 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-frr-conf\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.764725 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-frr-sockets\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.764760 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e27417-1fb4-4ca9-b104-d3d335370f0d-metrics-certs\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.764863 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b09b4ac-95c1-4c31-99a0-12b38c3412ae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5l2kt\" (UID: \"4b09b4ac-95c1-4c31-99a0-12b38c3412ae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.764919 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-reloader\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.764945 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln9xj\" (UniqueName: \"kubernetes.io/projected/66e27417-1fb4-4ca9-b104-d3d335370f0d-kube-api-access-ln9xj\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.764987 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66e27417-1fb4-4ca9-b104-d3d335370f0d-frr-startup\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.765006 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-metrics\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.765022 4732 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57sff\" (UniqueName: \"kubernetes.io/projected/4b09b4ac-95c1-4c31-99a0-12b38c3412ae-kube-api-access-57sff\") pod \"frr-k8s-webhook-server-7df86c4f6c-5l2kt\" (UID: \"4b09b4ac-95c1-4c31-99a0-12b38c3412ae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.866723 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-metrics-certs\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.866792 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b09b4ac-95c1-4c31-99a0-12b38c3412ae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5l2kt\" (UID: \"4b09b4ac-95c1-4c31-99a0-12b38c3412ae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.866823 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-metallb-excludel2\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.866853 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-memberlist\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.866878 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-reloader\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.866902 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53b5272f-ac5c-4616-a427-28fc830d7392-cert\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.866930 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln9xj\" (UniqueName: \"kubernetes.io/projected/66e27417-1fb4-4ca9-b104-d3d335370f0d-kube-api-access-ln9xj\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.866968 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57sff\" (UniqueName: \"kubernetes.io/projected/4b09b4ac-95c1-4c31-99a0-12b38c3412ae-kube-api-access-57sff\") pod \"frr-k8s-webhook-server-7df86c4f6c-5l2kt\" (UID: \"4b09b4ac-95c1-4c31-99a0-12b38c3412ae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.867004 4732 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66e27417-1fb4-4ca9-b104-d3d335370f0d-frr-startup\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.867024 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-metrics\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.867059 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53b5272f-ac5c-4616-a427-28fc830d7392-metrics-certs\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.867085 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-frr-conf\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.867115 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8wlp\" (UniqueName: \"kubernetes.io/projected/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-kube-api-access-c8wlp\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.867145 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58snc\" (UniqueName: \"kubernetes.io/projected/53b5272f-ac5c-4616-a427-28fc830d7392-kube-api-access-58snc\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.867171 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-frr-sockets\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.867200 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e27417-1fb4-4ca9-b104-d3d335370f0d-metrics-certs\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: E0131 09:11:31.867930 4732 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 31 09:11:31 crc kubenswrapper[4732]: E0131 09:11:31.868103 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b09b4ac-95c1-4c31-99a0-12b38c3412ae-cert podName:4b09b4ac-95c1-4c31-99a0-12b38c3412ae nodeName:}" failed. No retries permitted until 2026-01-31 09:11:32.368080103 +0000 UTC m=+630.673956397 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4b09b4ac-95c1-4c31-99a0-12b38c3412ae-cert") pod "frr-k8s-webhook-server-7df86c4f6c-5l2kt" (UID: "4b09b4ac-95c1-4c31-99a0-12b38c3412ae") : secret "frr-k8s-webhook-server-cert" not found Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.869132 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-metrics\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.869152 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-frr-sockets\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.869346 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-frr-conf\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.869595 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66e27417-1fb4-4ca9-b104-d3d335370f0d-reloader\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.869921 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66e27417-1fb4-4ca9-b104-d3d335370f0d-frr-startup\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.887921 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e27417-1fb4-4ca9-b104-d3d335370f0d-metrics-certs\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.890802 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln9xj\" (UniqueName: \"kubernetes.io/projected/66e27417-1fb4-4ca9-b104-d3d335370f0d-kube-api-access-ln9xj\") pod \"frr-k8s-6xvqw\" (UID: \"66e27417-1fb4-4ca9-b104-d3d335370f0d\") " pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.891699 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57sff\" (UniqueName: \"kubernetes.io/projected/4b09b4ac-95c1-4c31-99a0-12b38c3412ae-kube-api-access-57sff\") pod \"frr-k8s-webhook-server-7df86c4f6c-5l2kt\" (UID: \"4b09b4ac-95c1-4c31-99a0-12b38c3412ae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.961524 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-6xvqw" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.968719 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53b5272f-ac5c-4616-a427-28fc830d7392-metrics-certs\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.968767 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8wlp\" (UniqueName: \"kubernetes.io/projected/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-kube-api-access-c8wlp\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.968795 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58snc\" (UniqueName: \"kubernetes.io/projected/53b5272f-ac5c-4616-a427-28fc830d7392-kube-api-access-58snc\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.968843 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-metrics-certs\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.968878 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-metallb-excludel2\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.968897 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-memberlist\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.968914 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53b5272f-ac5c-4616-a427-28fc830d7392-cert\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:31 crc kubenswrapper[4732]: E0131 09:11:31.969540 4732 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 09:11:31 crc kubenswrapper[4732]: E0131 09:11:31.969842 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-memberlist podName:3fbb0c82-6b72-4313-94e2-3e71d27cf75f nodeName:}" failed. No retries permitted until 2026-01-31 09:11:32.469809507 +0000 UTC m=+630.775685711 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-memberlist") pod "speaker-gcmq2" (UID: "3fbb0c82-6b72-4313-94e2-3e71d27cf75f") : secret "metallb-memberlist" not found Jan 31 09:11:31 crc kubenswrapper[4732]: E0131 09:11:31.969927 4732 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 31 09:11:31 crc kubenswrapper[4732]: E0131 09:11:31.970123 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53b5272f-ac5c-4616-a427-28fc830d7392-metrics-certs podName:53b5272f-ac5c-4616-a427-28fc830d7392 nodeName:}" failed. No retries permitted until 2026-01-31 09:11:32.470108837 +0000 UTC m=+630.775985121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53b5272f-ac5c-4616-a427-28fc830d7392-metrics-certs") pod "controller-6968d8fdc4-jq8g8" (UID: "53b5272f-ac5c-4616-a427-28fc830d7392") : secret "controller-certs-secret" not found Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.970272 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-metallb-excludel2\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.972018 4732 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.973555 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-metrics-certs\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.987841 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53b5272f-ac5c-4616-a427-28fc830d7392-cert\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.990804 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8wlp\" (UniqueName: \"kubernetes.io/projected/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-kube-api-access-c8wlp\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:31 crc kubenswrapper[4732]: I0131 09:11:31.992425 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58snc\" (UniqueName: \"kubernetes.io/projected/53b5272f-ac5c-4616-a427-28fc830d7392-kube-api-access-58snc\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.373974 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b09b4ac-95c1-4c31-99a0-12b38c3412ae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5l2kt\" (UID: \"4b09b4ac-95c1-4c31-99a0-12b38c3412ae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.383528 4732 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b09b4ac-95c1-4c31-99a0-12b38c3412ae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5l2kt\" (UID: \"4b09b4ac-95c1-4c31-99a0-12b38c3412ae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.476149 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-memberlist\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2" Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.476246 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53b5272f-ac5c-4616-a427-28fc830d7392-metrics-certs\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:32 crc kubenswrapper[4732]: E0131 09:11:32.476313 4732 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 09:11:32 crc kubenswrapper[4732]: E0131 09:11:32.476397 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-memberlist podName:3fbb0c82-6b72-4313-94e2-3e71d27cf75f nodeName:}" failed. No retries permitted until 2026-01-31 09:11:33.476376451 +0000 UTC m=+631.782252665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-memberlist") pod "speaker-gcmq2" (UID: "3fbb0c82-6b72-4313-94e2-3e71d27cf75f") : secret "metallb-memberlist" not found Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.480840 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53b5272f-ac5c-4616-a427-28fc830d7392-metrics-certs\") pod \"controller-6968d8fdc4-jq8g8\" (UID: \"53b5272f-ac5c-4616-a427-28fc830d7392\") " pod="metallb-system/controller-6968d8fdc4-jq8g8" Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.528292 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerStarted","Data":"877b0278a4b6e1b30b2daf925c6dcdeea13d341f02b35472839810c10daea2cd"} Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.569781 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.665266 4732 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.845201 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-jq8g8"]
Jan 31 09:11:32 crc kubenswrapper[4732]: W0131 09:11:32.857338 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53b5272f_ac5c_4616_a427_28fc830d7392.slice/crio-9aba267b95b4b970dccb5a2b68ba16102c31510aaf08510acef1b3802eb834b1 WatchSource:0}: Error finding container 9aba267b95b4b970dccb5a2b68ba16102c31510aaf08510acef1b3802eb834b1: Status 404 returned error can't find the container with id 9aba267b95b4b970dccb5a2b68ba16102c31510aaf08510acef1b3802eb834b1
Jan 31 09:11:32 crc kubenswrapper[4732]: I0131 09:11:32.987748 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt"]
Jan 31 09:11:33 crc kubenswrapper[4732]: I0131 09:11:33.487842 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-memberlist\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2"
Jan 31 09:11:33 crc kubenswrapper[4732]: I0131 09:11:33.508263 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3fbb0c82-6b72-4313-94e2-3e71d27cf75f-memberlist\") pod \"speaker-gcmq2\" (UID: \"3fbb0c82-6b72-4313-94e2-3e71d27cf75f\") " pod="metallb-system/speaker-gcmq2"
Jan 31 09:11:33 crc kubenswrapper[4732]: I0131 09:11:33.532217 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gcmq2"
Jan 31 09:11:33 crc kubenswrapper[4732]: I0131 09:11:33.534820 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jq8g8" event={"ID":"53b5272f-ac5c-4616-a427-28fc830d7392","Type":"ContainerStarted","Data":"9b64846d479ae0c930962568ce3d86480c7ebfbad126f6da8f0410d10f039d73"}
Jan 31 09:11:33 crc kubenswrapper[4732]: I0131 09:11:33.534856 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jq8g8" event={"ID":"53b5272f-ac5c-4616-a427-28fc830d7392","Type":"ContainerStarted","Data":"9aba267b95b4b970dccb5a2b68ba16102c31510aaf08510acef1b3802eb834b1"}
Jan 31 09:11:33 crc kubenswrapper[4732]: I0131 09:11:33.538922 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" event={"ID":"4b09b4ac-95c1-4c31-99a0-12b38c3412ae","Type":"ContainerStarted","Data":"664da7da90ef74957fafe8fc535de0f58058c92cc3d30fbbb8064763faa14df3"}
Jan 31 09:11:33 crc kubenswrapper[4732]: W0131 09:11:33.572250 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fbb0c82_6b72_4313_94e2_3e71d27cf75f.slice/crio-43bfef5c0d5360e9f8ca58a5e973bf4c974077f1d5a0239fb1fe38a2f1214a36 WatchSource:0}: Error finding container 43bfef5c0d5360e9f8ca58a5e973bf4c974077f1d5a0239fb1fe38a2f1214a36: Status 404 returned error can't find the container with id 43bfef5c0d5360e9f8ca58a5e973bf4c974077f1d5a0239fb1fe38a2f1214a36
Jan 31 09:11:34 crc kubenswrapper[4732]: I0131 09:11:34.557732 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gcmq2" event={"ID":"3fbb0c82-6b72-4313-94e2-3e71d27cf75f","Type":"ContainerStarted","Data":"24175cffda77a64be4617731ed7ada49bd086d0d60b0c4fb51dd9840bc67cd32"}
Jan 31 09:11:34 crc kubenswrapper[4732]: I0131 09:11:34.558069 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gcmq2" event={"ID":"3fbb0c82-6b72-4313-94e2-3e71d27cf75f","Type":"ContainerStarted","Data":"43bfef5c0d5360e9f8ca58a5e973bf4c974077f1d5a0239fb1fe38a2f1214a36"}
Jan 31 09:11:37 crc kubenswrapper[4732]: I0131 09:11:37.576476 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gcmq2" event={"ID":"3fbb0c82-6b72-4313-94e2-3e71d27cf75f","Type":"ContainerStarted","Data":"85398a4d05a049926254268f96773bb7ec09fbba31a8f09acd0c7e2e6186e5a5"}
Jan 31 09:11:37 crc kubenswrapper[4732]: I0131 09:11:37.578008 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gcmq2"
Jan 31 09:11:37 crc kubenswrapper[4732]: I0131 09:11:37.581597 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jq8g8" event={"ID":"53b5272f-ac5c-4616-a427-28fc830d7392","Type":"ContainerStarted","Data":"ce64b6957d9463469d5ae7f256979aae906fbdcc5f67db85e42734c490543530"}
Jan 31 09:11:37 crc kubenswrapper[4732]: I0131 09:11:37.582332 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-jq8g8"
Jan 31 09:11:37 crc kubenswrapper[4732]: I0131 09:11:37.603381 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gcmq2" podStartSLOduration=3.76279723 podStartE2EDuration="6.603355935s" podCreationTimestamp="2026-01-31 09:11:31 +0000 UTC" firstStartedPulling="2026-01-31 09:11:33.828208324 +0000 UTC m=+632.134084528" lastFinishedPulling="2026-01-31 09:11:36.668767029 +0000 UTC m=+634.974643233" observedRunningTime="2026-01-31 09:11:37.598226051 +0000 UTC m=+635.904102295" watchObservedRunningTime="2026-01-31 09:11:37.603355935 +0000 UTC m=+635.909232139"
Jan 31 09:11:37 crc kubenswrapper[4732]: I0131 09:11:37.621103 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-jq8g8" podStartSLOduration=2.975426903 podStartE2EDuration="6.621088863s" podCreationTimestamp="2026-01-31 09:11:31 +0000 UTC" firstStartedPulling="2026-01-31 09:11:33.015829437 +0000 UTC m=+631.321705641" lastFinishedPulling="2026-01-31 09:11:36.661491397 +0000 UTC m=+634.967367601" observedRunningTime="2026-01-31 09:11:37.616434134 +0000 UTC m=+635.922310338" watchObservedRunningTime="2026-01-31 09:11:37.621088863 +0000 UTC m=+635.926965067"
Jan 31 09:11:40 crc kubenswrapper[4732]: I0131 09:11:40.597603 4732 generic.go:334] "Generic (PLEG): container finished" podID="66e27417-1fb4-4ca9-b104-d3d335370f0d" containerID="17bbc9855a2544b97a95d8f2b0fdd53c711d64083c9abcabe91ce4e910412695" exitCode=0
Jan 31 09:11:40 crc kubenswrapper[4732]: I0131 09:11:40.597716 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerDied","Data":"17bbc9855a2544b97a95d8f2b0fdd53c711d64083c9abcabe91ce4e910412695"}
Jan 31 09:11:40 crc kubenswrapper[4732]: I0131 09:11:40.599753 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" event={"ID":"4b09b4ac-95c1-4c31-99a0-12b38c3412ae","Type":"ContainerStarted","Data":"ff6b38e11ad90eab008de94627816f6752ead6b47a4056f58a8636e7cbc5d5bb"}
Jan 31 09:11:40 crc kubenswrapper[4732]: I0131 09:11:40.599938 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt"
Jan 31 09:11:40 crc kubenswrapper[4732]: I0131 09:11:40.639653 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt" podStartSLOduration=2.9852749579999998 podStartE2EDuration="9.639626381s" podCreationTimestamp="2026-01-31 09:11:31 +0000 UTC" firstStartedPulling="2026-01-31 09:11:32.99684151 +0000 UTC m=+631.302717734" lastFinishedPulling="2026-01-31 09:11:39.651192953 +0000 UTC m=+637.957069157" observedRunningTime="2026-01-31 09:11:40.636492561 +0000 UTC m=+638.942368785" watchObservedRunningTime="2026-01-31 09:11:40.639626381 +0000 UTC m=+638.945502625"
Jan 31 09:11:41 crc kubenswrapper[4732]: I0131 09:11:41.608966 4732 generic.go:334] "Generic (PLEG): container finished" podID="66e27417-1fb4-4ca9-b104-d3d335370f0d" containerID="f6b1222c50e8dc0c520cfd01300316178bbf984c000355ff97ab109558e3e51b" exitCode=0
Jan 31 09:11:41 crc kubenswrapper[4732]: I0131 09:11:41.609035 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerDied","Data":"f6b1222c50e8dc0c520cfd01300316178bbf984c000355ff97ab109558e3e51b"}
Jan 31 09:11:42 crc kubenswrapper[4732]: I0131 09:11:42.618367 4732 generic.go:334] "Generic (PLEG): container finished" podID="66e27417-1fb4-4ca9-b104-d3d335370f0d" containerID="d53fa6cf9963c28e888cd5cc8e0ae0167cc35089e4775e6e47dd17b931d7e185" exitCode=0
Jan 31 09:11:42 crc kubenswrapper[4732]: I0131 09:11:42.618468 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerDied","Data":"d53fa6cf9963c28e888cd5cc8e0ae0167cc35089e4775e6e47dd17b931d7e185"}
Jan 31 09:11:43 crc kubenswrapper[4732]: I0131 09:11:43.535927 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gcmq2"
Jan 31 09:11:43 crc kubenswrapper[4732]: I0131 09:11:43.628271 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerStarted","Data":"8de5087c7410064cf47cb82bff42f314320a14609cbf14becd65f8b953e1d6cb"}
Jan 31 09:11:43 crc kubenswrapper[4732]: I0131 09:11:43.628307 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerStarted","Data":"cf031ce87d3a9e971494476229ce8e2facfdb4482fa5e2ba8680ee58a83243c6"}
Jan 31 09:11:43 crc kubenswrapper[4732]: I0131 09:11:43.628318 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerStarted","Data":"ada6df0ddf7708e4acce2218c2f10c69c07e46a95fe1694346941044582d8ced"}
Jan 31 09:11:43 crc kubenswrapper[4732]: I0131 09:11:43.628326 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerStarted","Data":"fa4c2e2585a2e15ab8246d8bd459938c11f8cd61eed1485893cfcf919a42705b"}
Jan 31 09:11:43 crc kubenswrapper[4732]: I0131 09:11:43.628333 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerStarted","Data":"2f01cccd3bb714c4daceba533455375c146aa0f9592138bcc1240b752970604a"}
Jan 31 09:11:43 crc kubenswrapper[4732]: I0131 09:11:43.628341 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6xvqw" event={"ID":"66e27417-1fb4-4ca9-b104-d3d335370f0d","Type":"ContainerStarted","Data":"19352b10d242364a4618e968d8a1321ec35b8a7b8c4123b00617e9ecf000c3db"}
Jan 31 09:11:43 crc kubenswrapper[4732]: I0131 09:11:43.628438 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6xvqw"
Jan 31 09:11:43 crc kubenswrapper[4732]: I0131 09:11:43.657624 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-6xvqw" podStartSLOduration=5.150623854 podStartE2EDuration="12.657607491s" podCreationTimestamp="2026-01-31 09:11:31 +0000 UTC" firstStartedPulling="2026-01-31 09:11:32.126798019 +0000 UTC m=+630.432674213" lastFinishedPulling="2026-01-31 09:11:39.633781646 +0000 UTC m=+637.939657850" observedRunningTime="2026-01-31 09:11:43.656152644 +0000 UTC m=+641.962028908" watchObservedRunningTime="2026-01-31 09:11:43.657607491 +0000 UTC m=+641.963483705"
Jan 31 09:11:46 crc kubenswrapper[4732]: I0131 09:11:46.962733 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6xvqw"
Jan 31 09:11:47 crc kubenswrapper[4732]: I0131 09:11:47.034498 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6xvqw"
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.220954 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-rdb96"]
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.221635 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-rdb96"
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.224965 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-7wfqk"
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.225387 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.226587 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.248997 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-rdb96"]
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.315753 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn9b6\" (UniqueName: \"kubernetes.io/projected/58ea6948-d466-40a4-8953-23c4043c1f38-kube-api-access-qn9b6\") pod \"mariadb-operator-index-rdb96\" (UID: \"58ea6948-d466-40a4-8953-23c4043c1f38\") " pod="openstack-operators/mariadb-operator-index-rdb96"
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.417184 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn9b6\" (UniqueName: \"kubernetes.io/projected/58ea6948-d466-40a4-8953-23c4043c1f38-kube-api-access-qn9b6\") pod \"mariadb-operator-index-rdb96\" (UID: \"58ea6948-d466-40a4-8953-23c4043c1f38\") " pod="openstack-operators/mariadb-operator-index-rdb96"
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.439619 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn9b6\" (UniqueName: \"kubernetes.io/projected/58ea6948-d466-40a4-8953-23c4043c1f38-kube-api-access-qn9b6\") pod \"mariadb-operator-index-rdb96\" (UID: \"58ea6948-d466-40a4-8953-23c4043c1f38\") " pod="openstack-operators/mariadb-operator-index-rdb96"
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.543823 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-rdb96"
Jan 31 09:11:49 crc kubenswrapper[4732]: I0131 09:11:49.974476 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-rdb96"]
Jan 31 09:11:49 crc kubenswrapper[4732]: W0131 09:11:49.982762 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58ea6948_d466_40a4_8953_23c4043c1f38.slice/crio-01973f9f12fcaf815714cf7f987e491baba8870f0389c2831a780a93f9a000ee WatchSource:0}: Error finding container 01973f9f12fcaf815714cf7f987e491baba8870f0389c2831a780a93f9a000ee: Status 404 returned error can't find the container with id 01973f9f12fcaf815714cf7f987e491baba8870f0389c2831a780a93f9a000ee
Jan 31 09:11:50 crc kubenswrapper[4732]: I0131 09:11:50.677371 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-rdb96" event={"ID":"58ea6948-d466-40a4-8953-23c4043c1f38","Type":"ContainerStarted","Data":"01973f9f12fcaf815714cf7f987e491baba8870f0389c2831a780a93f9a000ee"}
Jan 31 09:11:51 crc kubenswrapper[4732]: I0131 09:11:51.684777 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-rdb96" event={"ID":"58ea6948-d466-40a4-8953-23c4043c1f38","Type":"ContainerStarted","Data":"0b71175788c7ad27d984a970d1b34e5416424e41f381a1e72319c94a101faeb6"}
Jan 31 09:11:51 crc kubenswrapper[4732]: I0131 09:11:51.701701 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-rdb96" podStartSLOduration=1.173812748 podStartE2EDuration="2.701632763s" podCreationTimestamp="2026-01-31 09:11:49 +0000 UTC" firstStartedPulling="2026-01-31 09:11:49.984692453 +0000 UTC m=+648.290568657" lastFinishedPulling="2026-01-31 09:11:51.512512418 +0000 UTC m=+649.818388672" observedRunningTime="2026-01-31 09:11:51.700042252 +0000 UTC m=+650.005918466" watchObservedRunningTime="2026-01-31 09:11:51.701632763 +0000 UTC m=+650.007508967"
Jan 31 09:11:52 crc kubenswrapper[4732]: I0131 09:11:52.584585 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5l2kt"
Jan 31 09:11:52 crc kubenswrapper[4732]: I0131 09:11:52.602876 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-rdb96"]
Jan 31 09:11:52 crc kubenswrapper[4732]: I0131 09:11:52.672391 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-jq8g8"
Jan 31 09:11:53 crc kubenswrapper[4732]: I0131 09:11:53.206735 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-b545g"]
Jan 31 09:11:53 crc kubenswrapper[4732]: I0131 09:11:53.208255 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:11:53 crc kubenswrapper[4732]: I0131 09:11:53.211133 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-b545g"]
Jan 31 09:11:53 crc kubenswrapper[4732]: I0131 09:11:53.266891 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5c8\" (UniqueName: \"kubernetes.io/projected/e502d767-ed18-4540-8d2c-ffd993e4822d-kube-api-access-jl5c8\") pod \"mariadb-operator-index-b545g\" (UID: \"e502d767-ed18-4540-8d2c-ffd993e4822d\") " pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:11:53 crc kubenswrapper[4732]: I0131 09:11:53.368357 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl5c8\" (UniqueName: \"kubernetes.io/projected/e502d767-ed18-4540-8d2c-ffd993e4822d-kube-api-access-jl5c8\") pod \"mariadb-operator-index-b545g\" (UID: \"e502d767-ed18-4540-8d2c-ffd993e4822d\") " pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:11:53 crc kubenswrapper[4732]: I0131 09:11:53.385355 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl5c8\" (UniqueName: \"kubernetes.io/projected/e502d767-ed18-4540-8d2c-ffd993e4822d-kube-api-access-jl5c8\") pod \"mariadb-operator-index-b545g\" (UID: \"e502d767-ed18-4540-8d2c-ffd993e4822d\") " pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:11:53 crc kubenswrapper[4732]: I0131 09:11:53.576183 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:11:53 crc kubenswrapper[4732]: I0131 09:11:53.699493 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-rdb96" podUID="58ea6948-d466-40a4-8953-23c4043c1f38" containerName="registry-server" containerID="cri-o://0b71175788c7ad27d984a970d1b34e5416424e41f381a1e72319c94a101faeb6" gracePeriod=2
Jan 31 09:11:54 crc kubenswrapper[4732]: I0131 09:11:54.001731 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-b545g"]
Jan 31 09:11:54 crc kubenswrapper[4732]: I0131 09:11:54.710021 4732 generic.go:334] "Generic (PLEG): container finished" podID="58ea6948-d466-40a4-8953-23c4043c1f38" containerID="0b71175788c7ad27d984a970d1b34e5416424e41f381a1e72319c94a101faeb6" exitCode=0
Jan 31 09:11:54 crc kubenswrapper[4732]: I0131 09:11:54.710113 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-rdb96" event={"ID":"58ea6948-d466-40a4-8953-23c4043c1f38","Type":"ContainerDied","Data":"0b71175788c7ad27d984a970d1b34e5416424e41f381a1e72319c94a101faeb6"}
Jan 31 09:11:54 crc kubenswrapper[4732]: I0131 09:11:54.711684 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b545g" event={"ID":"e502d767-ed18-4540-8d2c-ffd993e4822d","Type":"ContainerStarted","Data":"e336ec36e7b92b98ffa9cd023af81299e0d2a21836b58f3096392afd27981f20"}
Jan 31 09:11:54 crc kubenswrapper[4732]: I0131 09:11:54.937344 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-rdb96"
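The "Killing container with a grace period" entry above (gracePeriod=2) is the standard CRI stop contract: signal the container, give it up to two seconds to exit on its own, then force-kill; here registry-server exits cleanly (exitCode=0) inside the window. A minimal sketch of that contract; the function shape is an assumption, and the real signalling is delegated to the runtime (cri-o here):

```go
package main

import (
	"fmt"
	"time"
)

// stopWithGrace asks the container to terminate, polls exited() for up
// to gracePeriod, and only then force-kills. The callbacks stand in for
// the runtime's SIGTERM/SIGKILL handling.
func stopWithGrace(term, kill func(), exited func() bool, gracePeriod time.Duration) {
	term()
	deadline := time.Now().Add(gracePeriod)
	for time.Now().Before(deadline) {
		if exited() {
			fmt.Println("exited within grace period (exitCode=0 in the log)")
			return
		}
		time.Sleep(50 * time.Millisecond)
	}
	kill()
	fmt.Println("grace period elapsed; force-killed")
}

func main() {
	done := make(chan struct{})
	term := func() {
		go func() { time.Sleep(300 * time.Millisecond); close(done) }()
	}
	exited := func() bool {
		select {
		case <-done:
			return true
		default:
			return false
		}
	}
	stopWithGrace(term, func() {}, exited, 2*time.Second) // gracePeriod=2
}
```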
Jan 31 09:11:54 crc kubenswrapper[4732]: I0131 09:11:54.990092 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn9b6\" (UniqueName: \"kubernetes.io/projected/58ea6948-d466-40a4-8953-23c4043c1f38-kube-api-access-qn9b6\") pod \"58ea6948-d466-40a4-8953-23c4043c1f38\" (UID: \"58ea6948-d466-40a4-8953-23c4043c1f38\") "
Jan 31 09:11:54 crc kubenswrapper[4732]: I0131 09:11:54.996122 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ea6948-d466-40a4-8953-23c4043c1f38-kube-api-access-qn9b6" (OuterVolumeSpecName: "kube-api-access-qn9b6") pod "58ea6948-d466-40a4-8953-23c4043c1f38" (UID: "58ea6948-d466-40a4-8953-23c4043c1f38"). InnerVolumeSpecName "kube-api-access-qn9b6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:11:55 crc kubenswrapper[4732]: I0131 09:11:55.091737 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn9b6\" (UniqueName: \"kubernetes.io/projected/58ea6948-d466-40a4-8953-23c4043c1f38-kube-api-access-qn9b6\") on node \"crc\" DevicePath \"\""
Jan 31 09:11:55 crc kubenswrapper[4732]: I0131 09:11:55.723494 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-rdb96" event={"ID":"58ea6948-d466-40a4-8953-23c4043c1f38","Type":"ContainerDied","Data":"01973f9f12fcaf815714cf7f987e491baba8870f0389c2831a780a93f9a000ee"}
Jan 31 09:11:55 crc kubenswrapper[4732]: I0131 09:11:55.723565 4732 scope.go:117] "RemoveContainer" containerID="0b71175788c7ad27d984a970d1b34e5416424e41f381a1e72319c94a101faeb6"
Jan 31 09:11:55 crc kubenswrapper[4732]: I0131 09:11:55.723573 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-rdb96"
Jan 31 09:11:55 crc kubenswrapper[4732]: I0131 09:11:55.729430 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b545g" event={"ID":"e502d767-ed18-4540-8d2c-ffd993e4822d","Type":"ContainerStarted","Data":"e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7"}
Jan 31 09:11:55 crc kubenswrapper[4732]: I0131 09:11:55.766296 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-b545g" podStartSLOduration=2.081128973 podStartE2EDuration="2.766273931s" podCreationTimestamp="2026-01-31 09:11:53 +0000 UTC" firstStartedPulling="2026-01-31 09:11:54.012191594 +0000 UTC m=+652.318067828" lastFinishedPulling="2026-01-31 09:11:54.697336542 +0000 UTC m=+653.003212786" observedRunningTime="2026-01-31 09:11:55.751336268 +0000 UTC m=+654.057212482" watchObservedRunningTime="2026-01-31 09:11:55.766273931 +0000 UTC m=+654.072150145"
Jan 31 09:11:55 crc kubenswrapper[4732]: I0131 09:11:55.771008 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-rdb96"]
Jan 31 09:11:55 crc kubenswrapper[4732]: I0131 09:11:55.774311 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-rdb96"]
Jan 31 09:11:56 crc kubenswrapper[4732]: I0131 09:11:56.550135 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ea6948-d466-40a4-8953-23c4043c1f38" path="/var/lib/kubelet/pods/58ea6948-d466-40a4-8953-23c4043c1f38/volumes"
Jan 31 09:12:01 crc kubenswrapper[4732]: I0131 09:12:01.964957 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6xvqw"
Jan 31 09:12:03 crc kubenswrapper[4732]: I0131 09:12:03.577292 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:12:03 crc kubenswrapper[4732]: I0131 09:12:03.577778 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:12:03 crc kubenswrapper[4732]: I0131 09:12:03.615631 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:12:03 crc kubenswrapper[4732]: I0131 09:12:03.814032 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.645680 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"]
Jan 31 09:12:10 crc kubenswrapper[4732]: E0131 09:12:10.646417 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ea6948-d466-40a4-8953-23c4043c1f38" containerName="registry-server"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.646439 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ea6948-d466-40a4-8953-23c4043c1f38" containerName="registry-server"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.646590 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ea6948-d466-40a4-8953-23c4043c1f38" containerName="registry-server"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.648505 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
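Note when the cpu_manager/memory_manager lines above fire: at 09:12:10, as the next pod is admitted, sixteen seconds after mariadb-operator-index-rdb96 was removed. Admission is when the resource managers sweep assignments that still reference vanished pods. A minimal sketch of such a sweep; the map-based state is illustrative, not state_mem's real layout:

```go
package main

import "fmt"

type containerKey struct {
	podUID, containerName string
}

// removeStaleState drops resource assignments whose pod is no longer
// active, mirroring the "RemoveStaleState: removing container" and
// "Deleted CPUSet assignment" pairs in the log.
func removeStaleState(assignments map[containerKey][]int, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.containerName)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[containerKey][]int{
		{"58ea6948-d466-40a4-8953-23c4043c1f38", "registry-server"}: {2, 3},
	}
	removeStaleState(assignments, map[string]bool{}) // rdb96 already deleted
	fmt.Println("assignments left:", len(assignments))
}
```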
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.653315 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tnztr"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.658684 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"]
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.697852 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25s6h\" (UniqueName: \"kubernetes.io/projected/2996ae7b-aedb-4a67-a98e-b1a466347be0-kube-api-access-25s6h\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.697922 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.698009 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.798975 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.799099 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.799170 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25s6h\" (UniqueName: \"kubernetes.io/projected/2996ae7b-aedb-4a67-a98e-b1a466347be0-kube-api-access-25s6h\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.799734 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.799776 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.824650 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25s6h\" (UniqueName: \"kubernetes.io/projected/2996ae7b-aedb-4a67-a98e-b1a466347be0-kube-api-access-25s6h\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:10 crc kubenswrapper[4732]: I0131 09:12:10.971943 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:11 crc kubenswrapper[4732]: I0131 09:12:11.406112 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"]
Jan 31 09:12:11 crc kubenswrapper[4732]: I0131 09:12:11.859525 4732 generic.go:334] "Generic (PLEG): container finished" podID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerID="381b993ee77c54308c8a7301a323a8aafc300c3e038f7b063863ce23feb4f43f" exitCode=0
Jan 31 09:12:11 crc kubenswrapper[4732]: I0131 09:12:11.859572 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664" event={"ID":"2996ae7b-aedb-4a67-a98e-b1a466347be0","Type":"ContainerDied","Data":"381b993ee77c54308c8a7301a323a8aafc300c3e038f7b063863ce23feb4f43f"}
Jan 31 09:12:11 crc kubenswrapper[4732]: I0131 09:12:11.859599 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664" event={"ID":"2996ae7b-aedb-4a67-a98e-b1a466347be0","Type":"ContainerStarted","Data":"2ea7265b66893cb5dccecbcec37675792afbaf1dad165c06f341da7a7262a932"}
Jan 31 09:12:12 crc kubenswrapper[4732]: I0131 09:12:12.871414 4732 generic.go:334] "Generic (PLEG): container finished" podID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerID="4b3511c94c5ac6d2254541268344c305851e16ace80321bcfccadaad9af571e5" exitCode=0
Jan 31 09:12:12 crc kubenswrapper[4732]: I0131 09:12:12.871462 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664" event={"ID":"2996ae7b-aedb-4a67-a98e-b1a466347be0","Type":"ContainerDied","Data":"4b3511c94c5ac6d2254541268344c305851e16ace80321bcfccadaad9af571e5"}
Jan 31 09:12:13 crc kubenswrapper[4732]: I0131 09:12:13.880091 4732 generic.go:334] "Generic (PLEG): container finished" podID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerID="d5192da51c40249f5f8c71160cc71f8db81535a29bd24997e64ba0998bfe2e66" exitCode=0
Jan 31 09:12:13 crc kubenswrapper[4732]: I0131 09:12:13.880137 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664" event={"ID":"2996ae7b-aedb-4a67-a98e-b1a466347be0","Type":"ContainerDied","Data":"d5192da51c40249f5f8c71160cc71f8db81535a29bd24997e64ba0998bfe2e66"}
Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.119118 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"
Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.262632 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-bundle\") pod \"2996ae7b-aedb-4a67-a98e-b1a466347be0\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") "
Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.262810 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25s6h\" (UniqueName: \"kubernetes.io/projected/2996ae7b-aedb-4a67-a98e-b1a466347be0-kube-api-access-25s6h\") pod \"2996ae7b-aedb-4a67-a98e-b1a466347be0\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") "
Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.262896 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-util\") pod \"2996ae7b-aedb-4a67-a98e-b1a466347be0\" (UID: \"2996ae7b-aedb-4a67-a98e-b1a466347be0\") "
Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.263742 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-bundle" (OuterVolumeSpecName: "bundle") pod "2996ae7b-aedb-4a67-a98e-b1a466347be0" (UID: "2996ae7b-aedb-4a67-a98e-b1a466347be0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.268524 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2996ae7b-aedb-4a67-a98e-b1a466347be0-kube-api-access-25s6h" (OuterVolumeSpecName: "kube-api-access-25s6h") pod "2996ae7b-aedb-4a67-a98e-b1a466347be0" (UID: "2996ae7b-aedb-4a67-a98e-b1a466347be0"). InnerVolumeSpecName "kube-api-access-25s6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.279923 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-util" (OuterVolumeSpecName: "util") pod "2996ae7b-aedb-4a67-a98e-b1a466347be0" (UID: "2996ae7b-aedb-4a67-a98e-b1a466347be0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
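The bracketing entries here show the volume manager's reconciler from both sides: VerifyControllerAttachedVolume / MountVolume while the pod's volumes are in the desired state, then UnmountVolume / TearDown / "Volume detached" once the pod is gone and the volumes survive only in the actual state. A minimal sketch of that desired-versus-actual loop, with plain maps standing in for the real state caches:

```go
package main

import "fmt"

// reconcile unmounts volumes present only in the actual state, then
// mounts volumes present only in the desired state, echoing the order
// of operations in reconciler_common.go's log lines.
func reconcile(desired, actual map[string]bool) {
	for v := range actual {
		if !desired[v] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
			delete(actual, v)
		}
	}
	for v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
			actual[v] = true
		}
	}
}

func main() {
	// Pod deleted: the desired state is empty, so everything unmounts.
	actual := map[string]bool{"bundle": true, "util": true, "kube-api-access-25s6h": true}
	reconcile(map[string]bool{}, actual)
}
```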
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.364189 4732 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.364219 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25s6h\" (UniqueName: \"kubernetes.io/projected/2996ae7b-aedb-4a67-a98e-b1a466347be0-kube-api-access-25s6h\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.364227 4732 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2996ae7b-aedb-4a67-a98e-b1a466347be0-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.893484 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664" event={"ID":"2996ae7b-aedb-4a67-a98e-b1a466347be0","Type":"ContainerDied","Data":"2ea7265b66893cb5dccecbcec37675792afbaf1dad165c06f341da7a7262a932"} Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.893539 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ea7265b66893cb5dccecbcec37675792afbaf1dad165c06f341da7a7262a932" Jan 31 09:12:15 crc kubenswrapper[4732]: I0131 09:12:15.893555 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664" Jan 31 09:12:23 crc kubenswrapper[4732]: I0131 09:12:23.992744 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq"] Jan 31 09:12:23 crc kubenswrapper[4732]: E0131 09:12:23.993890 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerName="util" Jan 31 09:12:23 crc kubenswrapper[4732]: I0131 09:12:23.993921 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerName="util" Jan 31 09:12:23 crc kubenswrapper[4732]: E0131 09:12:23.993946 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerName="extract" Jan 31 09:12:23 crc kubenswrapper[4732]: I0131 09:12:23.993963 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerName="extract" Jan 31 09:12:23 crc kubenswrapper[4732]: E0131 09:12:23.994030 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerName="pull" Jan 31 09:12:23 crc kubenswrapper[4732]: I0131 09:12:23.994053 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerName="pull" Jan 31 09:12:23 crc kubenswrapper[4732]: I0131 09:12:23.994321 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2996ae7b-aedb-4a67-a98e-b1a466347be0" containerName="extract" Jan 31 09:12:23 crc kubenswrapper[4732]: I0131 09:12:23.995255 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:23 crc kubenswrapper[4732]: I0131 09:12:23.997325 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 09:12:23 crc kubenswrapper[4732]: I0131 09:12:23.997577 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-vzqj8" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.000247 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.001532 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq"] Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.178550 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-webhook-cert\") pod \"mariadb-operator-controller-manager-665d897fbd-rcqsq\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") " pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.178989 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jb9w\" (UniqueName: \"kubernetes.io/projected/088a3743-a071-4b0e-9cd8-66271eaeafdb-kube-api-access-9jb9w\") pod \"mariadb-operator-controller-manager-665d897fbd-rcqsq\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") " pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.179242 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-apiservice-cert\") pod \"mariadb-operator-controller-manager-665d897fbd-rcqsq\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") " pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.280032 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jb9w\" (UniqueName: \"kubernetes.io/projected/088a3743-a071-4b0e-9cd8-66271eaeafdb-kube-api-access-9jb9w\") pod \"mariadb-operator-controller-manager-665d897fbd-rcqsq\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") " pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.280473 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-apiservice-cert\") pod \"mariadb-operator-controller-manager-665d897fbd-rcqsq\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") " pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.280529 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-webhook-cert\") pod \"mariadb-operator-controller-manager-665d897fbd-rcqsq\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") 
" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.286787 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-webhook-cert\") pod \"mariadb-operator-controller-manager-665d897fbd-rcqsq\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") " pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.287057 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-apiservice-cert\") pod \"mariadb-operator-controller-manager-665d897fbd-rcqsq\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") " pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.331427 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jb9w\" (UniqueName: \"kubernetes.io/projected/088a3743-a071-4b0e-9cd8-66271eaeafdb-kube-api-access-9jb9w\") pod \"mariadb-operator-controller-manager-665d897fbd-rcqsq\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") " pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:24 crc kubenswrapper[4732]: I0131 09:12:24.615153 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:25 crc kubenswrapper[4732]: I0131 09:12:25.012361 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq"] Jan 31 09:12:25 crc kubenswrapper[4732]: W0131 09:12:25.021148 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod088a3743_a071_4b0e_9cd8_66271eaeafdb.slice/crio-afc983ab3b957756283c86e705571f21643e9da88c0bed95193b050d0c42bb03 WatchSource:0}: Error finding container afc983ab3b957756283c86e705571f21643e9da88c0bed95193b050d0c42bb03: Status 404 returned error can't find the container with id afc983ab3b957756283c86e705571f21643e9da88c0bed95193b050d0c42bb03 Jan 31 09:12:25 crc kubenswrapper[4732]: I0131 09:12:25.962487 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" event={"ID":"088a3743-a071-4b0e-9cd8-66271eaeafdb","Type":"ContainerStarted","Data":"afc983ab3b957756283c86e705571f21643e9da88c0bed95193b050d0c42bb03"} Jan 31 09:12:28 crc kubenswrapper[4732]: I0131 09:12:28.980450 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" event={"ID":"088a3743-a071-4b0e-9cd8-66271eaeafdb","Type":"ContainerStarted","Data":"8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62"} Jan 31 09:12:28 crc kubenswrapper[4732]: I0131 09:12:28.980892 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:29 crc kubenswrapper[4732]: I0131 09:12:29.002378 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" podStartSLOduration=2.405245742 podStartE2EDuration="6.002353908s" 
podCreationTimestamp="2026-01-31 09:12:23 +0000 UTC" firstStartedPulling="2026-01-31 09:12:25.023328958 +0000 UTC m=+683.329205162" lastFinishedPulling="2026-01-31 09:12:28.620437124 +0000 UTC m=+686.926313328" observedRunningTime="2026-01-31 09:12:28.996854574 +0000 UTC m=+687.302730788" watchObservedRunningTime="2026-01-31 09:12:29.002353908 +0000 UTC m=+687.308230132" Jan 31 09:12:34 crc kubenswrapper[4732]: I0131 09:12:34.619591 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" Jan 31 09:12:37 crc kubenswrapper[4732]: I0131 09:12:37.042066 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-bhhdv"] Jan 31 09:12:37 crc kubenswrapper[4732]: I0131 09:12:37.043120 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-bhhdv" Jan 31 09:12:37 crc kubenswrapper[4732]: I0131 09:12:37.048590 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-dxwnj" Jan 31 09:12:37 crc kubenswrapper[4732]: I0131 09:12:37.068741 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh5dj\" (UniqueName: \"kubernetes.io/projected/5357a934-2f68-4ec3-9ede-f29748dfe8ad-kube-api-access-jh5dj\") pod \"infra-operator-index-bhhdv\" (UID: \"5357a934-2f68-4ec3-9ede-f29748dfe8ad\") " pod="openstack-operators/infra-operator-index-bhhdv" Jan 31 09:12:37 crc kubenswrapper[4732]: I0131 09:12:37.070677 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-bhhdv"] Jan 31 09:12:37 crc kubenswrapper[4732]: I0131 09:12:37.169516 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh5dj\" (UniqueName: \"kubernetes.io/projected/5357a934-2f68-4ec3-9ede-f29748dfe8ad-kube-api-access-jh5dj\") pod \"infra-operator-index-bhhdv\" (UID: \"5357a934-2f68-4ec3-9ede-f29748dfe8ad\") " pod="openstack-operators/infra-operator-index-bhhdv" Jan 31 09:12:37 crc kubenswrapper[4732]: I0131 09:12:37.223313 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh5dj\" (UniqueName: \"kubernetes.io/projected/5357a934-2f68-4ec3-9ede-f29748dfe8ad-kube-api-access-jh5dj\") pod \"infra-operator-index-bhhdv\" (UID: \"5357a934-2f68-4ec3-9ede-f29748dfe8ad\") " pod="openstack-operators/infra-operator-index-bhhdv" Jan 31 09:12:37 crc kubenswrapper[4732]: I0131 09:12:37.359705 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-bhhdv" Jan 31 09:12:37 crc kubenswrapper[4732]: I0131 09:12:37.605417 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-bhhdv"] Jan 31 09:12:38 crc kubenswrapper[4732]: I0131 09:12:38.048209 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-bhhdv" event={"ID":"5357a934-2f68-4ec3-9ede-f29748dfe8ad","Type":"ContainerStarted","Data":"ff9087630cdd2cde2bf4883162d6628bdcdde4567cbcf522fa76d6568b74b558"} Jan 31 09:12:39 crc kubenswrapper[4732]: I0131 09:12:39.056286 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-bhhdv" event={"ID":"5357a934-2f68-4ec3-9ede-f29748dfe8ad","Type":"ContainerStarted","Data":"f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65"} Jan 31 09:12:39 crc kubenswrapper[4732]: I0131 09:12:39.076728 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-bhhdv" podStartSLOduration=1.145731429 podStartE2EDuration="2.076698833s" podCreationTimestamp="2026-01-31 09:12:37 +0000 UTC" firstStartedPulling="2026-01-31 09:12:37.608932826 +0000 UTC m=+695.914809020" lastFinishedPulling="2026-01-31 09:12:38.53990022 +0000 UTC m=+696.845776424" observedRunningTime="2026-01-31 09:12:39.068776183 +0000 UTC m=+697.374652427" watchObservedRunningTime="2026-01-31 09:12:39.076698833 +0000 UTC m=+697.382575077" Jan 31 09:12:40 crc kubenswrapper[4732]: I0131 09:12:40.014104 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-bhhdv"] Jan 31 09:12:40 crc kubenswrapper[4732]: I0131 09:12:40.611732 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-lbfxz"] Jan 31 09:12:40 crc kubenswrapper[4732]: I0131 09:12:40.612338 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-lbfxz" Jan 31 09:12:40 crc kubenswrapper[4732]: I0131 09:12:40.623289 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-lbfxz"] Jan 31 09:12:40 crc kubenswrapper[4732]: I0131 09:12:40.737202 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgj7k\" (UniqueName: \"kubernetes.io/projected/0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf-kube-api-access-jgj7k\") pod \"infra-operator-index-lbfxz\" (UID: \"0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf\") " pod="openstack-operators/infra-operator-index-lbfxz" Jan 31 09:12:40 crc kubenswrapper[4732]: I0131 09:12:40.838601 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgj7k\" (UniqueName: \"kubernetes.io/projected/0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf-kube-api-access-jgj7k\") pod \"infra-operator-index-lbfxz\" (UID: \"0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf\") " pod="openstack-operators/infra-operator-index-lbfxz" Jan 31 09:12:40 crc kubenswrapper[4732]: I0131 09:12:40.867118 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgj7k\" (UniqueName: \"kubernetes.io/projected/0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf-kube-api-access-jgj7k\") pod \"infra-operator-index-lbfxz\" (UID: \"0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf\") " pod="openstack-operators/infra-operator-index-lbfxz" Jan 31 09:12:40 crc kubenswrapper[4732]: I0131 09:12:40.927308 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-lbfxz" Jan 31 09:12:41 crc kubenswrapper[4732]: I0131 09:12:41.068328 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-bhhdv" podUID="5357a934-2f68-4ec3-9ede-f29748dfe8ad" containerName="registry-server" containerID="cri-o://f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65" gracePeriod=2 Jan 31 09:12:41 crc kubenswrapper[4732]: I0131 09:12:41.153367 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-lbfxz"] Jan 31 09:12:41 crc kubenswrapper[4732]: I0131 09:12:41.374684 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-bhhdv" Jan 31 09:12:41 crc kubenswrapper[4732]: I0131 09:12:41.453278 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh5dj\" (UniqueName: \"kubernetes.io/projected/5357a934-2f68-4ec3-9ede-f29748dfe8ad-kube-api-access-jh5dj\") pod \"5357a934-2f68-4ec3-9ede-f29748dfe8ad\" (UID: \"5357a934-2f68-4ec3-9ede-f29748dfe8ad\") " Jan 31 09:12:41 crc kubenswrapper[4732]: I0131 09:12:41.458162 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5357a934-2f68-4ec3-9ede-f29748dfe8ad-kube-api-access-jh5dj" (OuterVolumeSpecName: "kube-api-access-jh5dj") pod "5357a934-2f68-4ec3-9ede-f29748dfe8ad" (UID: "5357a934-2f68-4ec3-9ede-f29748dfe8ad"). InnerVolumeSpecName "kube-api-access-jh5dj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:12:41 crc kubenswrapper[4732]: I0131 09:12:41.554356 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh5dj\" (UniqueName: \"kubernetes.io/projected/5357a934-2f68-4ec3-9ede-f29748dfe8ad-kube-api-access-jh5dj\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.078009 4732 generic.go:334] "Generic (PLEG): container finished" podID="5357a934-2f68-4ec3-9ede-f29748dfe8ad" containerID="f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65" exitCode=0 Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.078087 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-bhhdv" event={"ID":"5357a934-2f68-4ec3-9ede-f29748dfe8ad","Type":"ContainerDied","Data":"f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65"} Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.078086 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-bhhdv" Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.078118 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-bhhdv" event={"ID":"5357a934-2f68-4ec3-9ede-f29748dfe8ad","Type":"ContainerDied","Data":"ff9087630cdd2cde2bf4883162d6628bdcdde4567cbcf522fa76d6568b74b558"} Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.078140 4732 scope.go:117] "RemoveContainer" containerID="f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65" Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.095997 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-lbfxz" event={"ID":"0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf","Type":"ContainerStarted","Data":"910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f"} Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.096080 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-lbfxz" event={"ID":"0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf","Type":"ContainerStarted","Data":"28d499c0e0c3ca32837e0d3a623958320049a25bd1e23f61b0a33ef2f6ce6116"} Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.113387 4732 scope.go:117] "RemoveContainer" containerID="f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65" Jan 31 09:12:42 crc kubenswrapper[4732]: E0131 09:12:42.114274 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65\": container with ID starting with f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65 not found: ID does not exist" containerID="f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65" Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.114350 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65"} err="failed to get container status \"f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65\": rpc error: code = NotFound desc = could not find container \"f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65\": container with ID starting with f2a48331f1b98db1f366e16fe90fc6e8caf17a9947d5cdc7bafb09547d877d65 not found: ID does not exist" Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 
09:12:42.124745 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-lbfxz" podStartSLOduration=1.728920207 podStartE2EDuration="2.124729169s" podCreationTimestamp="2026-01-31 09:12:40 +0000 UTC" firstStartedPulling="2026-01-31 09:12:41.166597315 +0000 UTC m=+699.472473519" lastFinishedPulling="2026-01-31 09:12:41.562406277 +0000 UTC m=+699.868282481" observedRunningTime="2026-01-31 09:12:42.120292188 +0000 UTC m=+700.426168392" watchObservedRunningTime="2026-01-31 09:12:42.124729169 +0000 UTC m=+700.430605373" Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.140057 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-bhhdv"] Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.144275 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-bhhdv"] Jan 31 09:12:42 crc kubenswrapper[4732]: I0131 09:12:42.552763 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5357a934-2f68-4ec3-9ede-f29748dfe8ad" path="/var/lib/kubelet/pods/5357a934-2f68-4ec3-9ede-f29748dfe8ad/volumes" Jan 31 09:12:50 crc kubenswrapper[4732]: I0131 09:12:50.928311 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-lbfxz" Jan 31 09:12:50 crc kubenswrapper[4732]: I0131 09:12:50.929276 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-lbfxz" Jan 31 09:12:50 crc kubenswrapper[4732]: I0131 09:12:50.973859 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-lbfxz" Jan 31 09:12:51 crc kubenswrapper[4732]: I0131 09:12:51.194413 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-lbfxz" Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.473395 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"] Jan 31 09:12:52 crc kubenswrapper[4732]: E0131 09:12:52.473635 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5357a934-2f68-4ec3-9ede-f29748dfe8ad" containerName="registry-server" Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.473650 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="5357a934-2f68-4ec3-9ede-f29748dfe8ad" containerName="registry-server" Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.473822 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="5357a934-2f68-4ec3-9ede-f29748dfe8ad" containerName="registry-server" Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.474755 4732 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.477058 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tnztr"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.497605 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"]
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.613557 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.613643 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.613700 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d5k6\" (UniqueName: \"kubernetes.io/projected/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-kube-api-access-6d5k6\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.714861 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.714929 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.714954 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d5k6\" (UniqueName: \"kubernetes.io/projected/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-kube-api-access-6d5k6\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.715891 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.715975 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.737015 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d5k6\" (UniqueName: \"kubernetes.io/projected/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-kube-api-access-6d5k6\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:52 crc kubenswrapper[4732]: I0131 09:12:52.804016 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:53 crc kubenswrapper[4732]: I0131 09:12:53.223096 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"]
Jan 31 09:12:53 crc kubenswrapper[4732]: W0131 09:12:53.229883 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5ae42fb_86af_4a2d_9570_b3be9f3f8f4a.slice/crio-ef05ff4b2356807e27538526c9ae1fe636dcc9b9ff7b169b5df1c7510b87eaa5 WatchSource:0}: Error finding container ef05ff4b2356807e27538526c9ae1fe636dcc9b9ff7b169b5df1c7510b87eaa5: Status 404 returned error can't find the container with id ef05ff4b2356807e27538526c9ae1fe636dcc9b9ff7b169b5df1c7510b87eaa5
Jan 31 09:12:54 crc kubenswrapper[4732]: I0131 09:12:54.184007 4732 generic.go:334] "Generic (PLEG): container finished" podID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerID="05dc77d92e7403bddf9c4216d3f9e9927975c83db475a4592b0fe670adbee115" exitCode=0
Jan 31 09:12:54 crc kubenswrapper[4732]: I0131 09:12:54.184173 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg" event={"ID":"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a","Type":"ContainerDied","Data":"05dc77d92e7403bddf9c4216d3f9e9927975c83db475a4592b0fe670adbee115"}
Jan 31 09:12:54 crc kubenswrapper[4732]: I0131 09:12:54.184330 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg" event={"ID":"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a","Type":"ContainerStarted","Data":"ef05ff4b2356807e27538526c9ae1fe636dcc9b9ff7b169b5df1c7510b87eaa5"}
Jan 31 09:12:55 crc kubenswrapper[4732]: I0131 09:12:55.194389 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg" event={"ID":"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a","Type":"ContainerStarted","Data":"6e1beb997853403436be77394dbddcd82093917a762178f04e455efdc50a1f83"}
Jan 31 09:12:56 crc kubenswrapper[4732]: I0131 09:12:56.205334 4732 generic.go:334] "Generic (PLEG): container finished" podID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerID="6e1beb997853403436be77394dbddcd82093917a762178f04e455efdc50a1f83" exitCode=0
Jan 31 09:12:56 crc kubenswrapper[4732]: I0131 09:12:56.205388 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg" event={"ID":"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a","Type":"ContainerDied","Data":"6e1beb997853403436be77394dbddcd82093917a762178f04e455efdc50a1f83"}
Jan 31 09:12:57 crc kubenswrapper[4732]: I0131 09:12:57.217708 4732 generic.go:334] "Generic (PLEG): container finished" podID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerID="a890dd33402e340e3c5622818af3229b97a815440f243204f47c951187c50dbe" exitCode=0
Jan 31 09:12:57 crc kubenswrapper[4732]: I0131 09:12:57.217828 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg" event={"ID":"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a","Type":"ContainerDied","Data":"a890dd33402e340e3c5622818af3229b97a815440f243204f47c951187c50dbe"}
Jan 31 09:12:58 crc kubenswrapper[4732]: I0131 09:12:58.493545 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"
Jan 31 09:12:58 crc kubenswrapper[4732]: I0131 09:12:58.604087 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-util\") pod \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") "
Jan 31 09:12:58 crc kubenswrapper[4732]: I0131 09:12:58.604173 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d5k6\" (UniqueName: \"kubernetes.io/projected/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-kube-api-access-6d5k6\") pod \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") "
Jan 31 09:12:58 crc kubenswrapper[4732]: I0131 09:12:58.604238 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-bundle\") pod \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\" (UID: \"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a\") "
Jan 31 09:12:58 crc kubenswrapper[4732]: I0131 09:12:58.606503 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-bundle" (OuterVolumeSpecName: "bundle") pod "f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" (UID: "f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:12:58 crc kubenswrapper[4732]: I0131 09:12:58.618757 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-util" (OuterVolumeSpecName: "util") pod "f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" (UID: "f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
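The "SyncLoop (PLEG)" lines above all follow one fixed shape: pod, then an event with ID (pod UID), Type (ContainerStarted/ContainerDied), and Data (a container or sandbox ID). A minimal Go stdlib sketch for pulling just those events out of a log fed on stdin — this is a hypothetical standalone helper, not kubelet code, and its output format is made up for illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches only fields actually present in the PLEG lines above.
var pleg = regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
			// m[3] is ContainerStarted/ContainerDied; m[4] the container (or sandbox) ID.
			fmt.Printf("%-16s pod=%s id=%s\n", m[3], m[1], m[4])
		}
	}
}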
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:12:58 crc kubenswrapper[4732]: I0131 09:12:58.706196 4732 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:58 crc kubenswrapper[4732]: I0131 09:12:58.706233 4732 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:12:59 crc kubenswrapper[4732]: I0131 09:12:59.233524 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg" event={"ID":"f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a","Type":"ContainerDied","Data":"ef05ff4b2356807e27538526c9ae1fe636dcc9b9ff7b169b5df1c7510b87eaa5"} Jan 31 09:12:59 crc kubenswrapper[4732]: I0131 09:12:59.233582 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef05ff4b2356807e27538526c9ae1fe636dcc9b9ff7b169b5df1c7510b87eaa5" Jan 31 09:12:59 crc kubenswrapper[4732]: I0131 09:12:59.233697 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg" Jan 31 09:12:59 crc kubenswrapper[4732]: I0131 09:12:59.630197 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-kube-api-access-6d5k6" (OuterVolumeSpecName: "kube-api-access-6d5k6") pod "f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" (UID: "f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a"). InnerVolumeSpecName "kube-api-access-6d5k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:12:59 crc kubenswrapper[4732]: I0131 09:12:59.718896 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d5k6\" (UniqueName: \"kubernetes.io/projected/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a-kube-api-access-6d5k6\") on node \"crc\" DevicePath \"\"" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.328247 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 09:13:10 crc kubenswrapper[4732]: E0131 09:13:10.329163 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerName="util" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.329180 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerName="util" Jan 31 09:13:10 crc kubenswrapper[4732]: E0131 09:13:10.329203 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerName="pull" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.329211 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerName="pull" Jan 31 09:13:10 crc kubenswrapper[4732]: E0131 09:13:10.329367 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerName="extract" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.329380 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerName="extract" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.329500 4732 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" containerName="extract" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.330852 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.336496 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-scripts" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.340151 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"kube-root-ca.crt" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.340472 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"galera-openstack-dockercfg-4btb4" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.340579 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-config-data" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.340691 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openshift-service-ca.crt" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.343255 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.344245 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.349362 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.364139 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.365623 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.374355 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.385639 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.475801 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnkrx\" (UniqueName: \"kubernetes.io/projected/616eedfe-830a-4ca8-9c42-a2cfd9352312-kube-api-access-jnkrx\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.475837 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-default\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.475862 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kolla-config\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.475881 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-operator-scripts\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.475906 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-kolla-config\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.475923 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476004 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476057 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjncs\" (UniqueName: \"kubernetes.io/projected/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kube-api-access-gjncs\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 
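The "SyncLoop ADD/UPDATE/DELETE/REMOVE" entries with source="api" above are fed by the kubelet's watch on the API server. A minimal client-go sketch of the same kind of pod watch, as a standalone observer for illustration (not kubelet code; the namespace and kubeconfig path are assumptions):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Watch pods in the namespace the log above is creating them in.
	w, err := cs.CoreV1().Pods("swift-kuttl-tests").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		// ev.Type is ADDED/MODIFIED/DELETED, mirroring the sync-loop entries.
		fmt.Printf("%s\t%s\n", ev.Type, ev.Object.(metav1.Object).GetName())
	}
}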
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476095 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-operator-scripts\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476134 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-generated\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476160 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476195 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476239 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-generated\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476288 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xg64\" (UniqueName: \"kubernetes.io/projected/0682a582-79d6-4286-9a43-e4a258dde73f-kube-api-access-9xg64\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476363 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476392 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-kolla-config\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476446 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-default\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.476471 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-default\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578149 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xg64\" (UniqueName: \"kubernetes.io/projected/0682a582-79d6-4286-9a43-e4a258dde73f-kube-api-access-9xg64\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578214 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578236 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-kolla-config\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578270 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-default\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578295 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-default\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578336 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnkrx\" (UniqueName: \"kubernetes.io/projected/616eedfe-830a-4ca8-9c42-a2cfd9352312-kube-api-access-jnkrx\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578359 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-default\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578381 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kolla-config\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578407 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-operator-scripts\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578439 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-kolla-config\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578464 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578488 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578509 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjncs\" (UniqueName: \"kubernetes.io/projected/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kube-api-access-gjncs\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578532 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-operator-scripts\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578555 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-generated\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578576 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578606 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.578644 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-generated\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0"
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-generated\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.579108 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-kolla-config\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.579148 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-generated\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.579249 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-default\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.579449 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") device mount path \"/mnt/openstack/pv01\"" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.579809 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-generated\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.580243 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-default\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.580273 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-kolla-config\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.580395 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") device mount path \"/mnt/openstack/pv05\"" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.580576 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-operator-scripts\") pod 
\"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.580685 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kolla-config\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.580845 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-default\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.581006 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.581008 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") device mount path \"/mnt/openstack/pv04\"" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.581974 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-operator-scripts\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.582867 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-operator-scripts\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.599278 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.600037 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.606120 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xg64\" (UniqueName: \"kubernetes.io/projected/0682a582-79d6-4286-9a43-e4a258dde73f-kube-api-access-9xg64\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.606171 4732 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gjncs\" (UniqueName: \"kubernetes.io/projected/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kube-api-access-gjncs\") pod \"openstack-galera-2\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.609401 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.610975 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnkrx\" (UniqueName: \"kubernetes.io/projected/616eedfe-830a-4ca8-9c42-a2cfd9352312-kube-api-access-jnkrx\") pod \"openstack-galera-0\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.659755 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.665351 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.685371 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.890567 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"] Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.892072 4732 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.895220 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.895571 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8kdwd"
Jan 31 09:13:10 crc kubenswrapper[4732]: I0131 09:13:10.916057 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"]
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.018997 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp7fg\" (UniqueName: \"kubernetes.io/projected/415e013e-ff2f-47b6-a17d-c0ba8f80071a-kube-api-access-vp7fg\") pod \"infra-operator-controller-manager-7f868546f6-qkfms\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") " pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.019055 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-webhook-cert\") pod \"infra-operator-controller-manager-7f868546f6-qkfms\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") " pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.019087 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-apiservice-cert\") pod \"infra-operator-controller-manager-7f868546f6-qkfms\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") " pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.093391 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"]
Jan 31 09:13:11 crc kubenswrapper[4732]: W0131 09:13:11.099478 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7eb0179_b292_4a09_a07d_3d9bfe7978f3.slice/crio-a51f6b605ece49b9fb83731b6d6e01366d3ebfefee0aa6fe01b529f67148475c WatchSource:0}: Error finding container a51f6b605ece49b9fb83731b6d6e01366d3ebfefee0aa6fe01b529f67148475c: Status 404 returned error can't find the container with id a51f6b605ece49b9fb83731b6d6e01366d3ebfefee0aa6fe01b529f67148475c
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.120756 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-webhook-cert\") pod \"infra-operator-controller-manager-7f868546f6-qkfms\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") " pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.120809 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-apiservice-cert\") pod \"infra-operator-controller-manager-7f868546f6-qkfms\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") " pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.120871 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp7fg\" (UniqueName: \"kubernetes.io/projected/415e013e-ff2f-47b6-a17d-c0ba8f80071a-kube-api-access-vp7fg\") pod \"infra-operator-controller-manager-7f868546f6-qkfms\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") " pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.126212 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-apiservice-cert\") pod \"infra-operator-controller-manager-7f868546f6-qkfms\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") " pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.126324 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-webhook-cert\") pod \"infra-operator-controller-manager-7f868546f6-qkfms\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") " pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.138005 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp7fg\" (UniqueName: \"kubernetes.io/projected/415e013e-ff2f-47b6-a17d-c0ba8f80071a-kube-api-access-vp7fg\") pod \"infra-operator-controller-manager-7f868546f6-qkfms\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") " pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.223068 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.243800 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"]
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.266215 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"]
Jan 31 09:13:11 crc kubenswrapper[4732]: W0131 09:13:11.279610 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0682a582_79d6_4286_9a43_e4a258dde73f.slice/crio-e0837550f45b62d919804bb06cc6ab1d6a363eb752c3707a21645f710833b1bc WatchSource:0}: Error finding container e0837550f45b62d919804bb06cc6ab1d6a363eb752c3707a21645f710833b1bc: Status 404 returned error can't find the container with id e0837550f45b62d919804bb06cc6ab1d6a363eb752c3707a21645f710833b1bc
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.308614 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f7eb0179-b292-4a09-a07d-3d9bfe7978f3","Type":"ContainerStarted","Data":"a51f6b605ece49b9fb83731b6d6e01366d3ebfefee0aa6fe01b529f67148475c"}
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.310122 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"616eedfe-830a-4ca8-9c42-a2cfd9352312","Type":"ContainerStarted","Data":"2da8aa3ed8596e5beb1462be0a364f515a0e7e35f648a1fd6f1e41dd41f084dd"}
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.311260 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"0682a582-79d6-4286-9a43-e4a258dde73f","Type":"ContainerStarted","Data":"e0837550f45b62d919804bb06cc6ab1d6a363eb752c3707a21645f710833b1bc"}
Jan 31 09:13:11 crc kubenswrapper[4732]: I0131 09:13:11.428014 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"]
Jan 31 09:13:12 crc kubenswrapper[4732]: I0131 09:13:12.319673 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms" event={"ID":"415e013e-ff2f-47b6-a17d-c0ba8f80071a","Type":"ContainerStarted","Data":"ba8a3b878f56e9627430ea250d8fbd2f4a60f48d4ce391e390485cb7a6931e6c"}
Jan 31 09:13:17 crc kubenswrapper[4732]: I0131 09:13:17.497849 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 09:13:17 crc kubenswrapper[4732]: I0131 09:13:17.498422 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:13:19 crc kubenswrapper[4732]: I0131 09:13:19.367307 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms" event={"ID":"415e013e-ff2f-47b6-a17d-c0ba8f80071a","Type":"ContainerStarted","Data":"b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22"}
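The prober entries above record a failing HTTP liveness probe (GET http://127.0.0.1:8798/health refused). A sketch of what such a probe looks like at the API level, using the corev1 types; the thresholds are illustrative defaults, not read from the machine-config-daemon manifest, and the ProbeHandler field name is as in current k8s.io/api:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	liveness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    10, // assumed defaults
		FailureThreshold: 3,
	}
	fmt.Printf("%+v\n", liveness)
}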
Jan 31 09:13:19 crc kubenswrapper[4732]: I0131 09:13:19.368878 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:13:19 crc kubenswrapper[4732]: I0131 09:13:19.397581 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms" podStartSLOduration=1.721660443 podStartE2EDuration="9.397560247s" podCreationTimestamp="2026-01-31 09:13:10 +0000 UTC" firstStartedPulling="2026-01-31 09:13:11.441190366 +0000 UTC m=+729.747066570" lastFinishedPulling="2026-01-31 09:13:19.11709017 +0000 UTC m=+737.422966374" observedRunningTime="2026-01-31 09:13:19.386062478 +0000 UTC m=+737.691938682" watchObservedRunningTime="2026-01-31 09:13:19.397560247 +0000 UTC m=+737.703436451"
Jan 31 09:13:20 crc kubenswrapper[4732]: I0131 09:13:20.373296 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"0682a582-79d6-4286-9a43-e4a258dde73f","Type":"ContainerStarted","Data":"ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279"}
Jan 31 09:13:20 crc kubenswrapper[4732]: I0131 09:13:20.374438 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f7eb0179-b292-4a09-a07d-3d9bfe7978f3","Type":"ContainerStarted","Data":"e5b6220dc9c37c2ae0ded221270ed7b6d934a81efa02b526750d6847a367f930"}
Jan 31 09:13:20 crc kubenswrapper[4732]: I0131 09:13:20.375411 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"616eedfe-830a-4ca8-9c42-a2cfd9352312","Type":"ContainerStarted","Data":"492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4"}
Jan 31 09:13:23 crc kubenswrapper[4732]: E0131 09:13:23.172889 4732 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7eb0179_b292_4a09_a07d_3d9bfe7978f3.slice/crio-e5b6220dc9c37c2ae0ded221270ed7b6d934a81efa02b526750d6847a367f930.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7eb0179_b292_4a09_a07d_3d9bfe7978f3.slice/crio-conmon-e5b6220dc9c37c2ae0ded221270ed7b6d934a81efa02b526750d6847a367f930.scope\": RecentStats: unable to find data in memory cache]"
Jan 31 09:13:23 crc kubenswrapper[4732]: I0131 09:13:23.400818 4732 generic.go:334] "Generic (PLEG): container finished" podID="f7eb0179-b292-4a09-a07d-3d9bfe7978f3" containerID="e5b6220dc9c37c2ae0ded221270ed7b6d934a81efa02b526750d6847a367f930" exitCode=0
Jan 31 09:13:23 crc kubenswrapper[4732]: I0131 09:13:23.401334 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f7eb0179-b292-4a09-a07d-3d9bfe7978f3","Type":"ContainerDied","Data":"e5b6220dc9c37c2ae0ded221270ed7b6d934a81efa02b526750d6847a367f930"}
Jan 31 09:13:23 crc kubenswrapper[4732]: I0131 09:13:23.409270 4732 generic.go:334] "Generic (PLEG): container finished" podID="616eedfe-830a-4ca8-9c42-a2cfd9352312" containerID="492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4" exitCode=0
Jan 31 09:13:23 crc kubenswrapper[4732]: I0131 09:13:23.409437 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"616eedfe-830a-4ca8-9c42-a2cfd9352312","Type":"ContainerDied","Data":"492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4"}
Jan 31 09:13:23 crc kubenswrapper[4732]: I0131 09:13:23.417120 4732 generic.go:334] "Generic (PLEG): container finished" podID="0682a582-79d6-4286-9a43-e4a258dde73f" containerID="ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279" exitCode=0
Jan 31 09:13:23 crc kubenswrapper[4732]: I0131 09:13:23.417197 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"0682a582-79d6-4286-9a43-e4a258dde73f","Type":"ContainerDied","Data":"ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279"}
Jan 31 09:13:24 crc kubenswrapper[4732]: I0131 09:13:24.432556 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"0682a582-79d6-4286-9a43-e4a258dde73f","Type":"ContainerStarted","Data":"ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e"}
Jan 31 09:13:24 crc kubenswrapper[4732]: I0131 09:13:24.436735 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f7eb0179-b292-4a09-a07d-3d9bfe7978f3","Type":"ContainerStarted","Data":"bf4dd15bf1d89a707a1cc67b69c360511a62db3a9d76bcd538b729b470197a50"}
Jan 31 09:13:24 crc kubenswrapper[4732]: I0131 09:13:24.443194 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"616eedfe-830a-4ca8-9c42-a2cfd9352312","Type":"ContainerStarted","Data":"8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511"}
Jan 31 09:13:24 crc kubenswrapper[4732]: I0131 09:13:24.451527 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-1" podStartSLOduration=7.446527402 podStartE2EDuration="15.451507717s" podCreationTimestamp="2026-01-31 09:13:09 +0000 UTC" firstStartedPulling="2026-01-31 09:13:11.282872424 +0000 UTC m=+729.588748618" lastFinishedPulling="2026-01-31 09:13:19.287852739 +0000 UTC m=+737.593728933" observedRunningTime="2026-01-31 09:13:24.44935788 +0000 UTC m=+742.755234084" watchObservedRunningTime="2026-01-31 09:13:24.451507717 +0000 UTC m=+742.757383921"
Jan 31 09:13:24 crc kubenswrapper[4732]: I0131 09:13:24.477902 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-2" podStartSLOduration=7.25002611 podStartE2EDuration="15.477885148s" podCreationTimestamp="2026-01-31 09:13:09 +0000 UTC" firstStartedPulling="2026-01-31 09:13:11.101560606 +0000 UTC m=+729.407436810" lastFinishedPulling="2026-01-31 09:13:19.329419644 +0000 UTC m=+737.635295848" observedRunningTime="2026-01-31 09:13:24.474608046 +0000 UTC m=+742.780484250" watchObservedRunningTime="2026-01-31 09:13:24.477885148 +0000 UTC m=+742.783761352"
Jan 31 09:13:24 crc kubenswrapper[4732]: I0131 09:13:24.494925 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-0" podStartSLOduration=7.47851855 podStartE2EDuration="15.494909539s" podCreationTimestamp="2026-01-31 09:13:09 +0000 UTC" firstStartedPulling="2026-01-31 09:13:11.261946043 +0000 UTC m=+729.567822247" lastFinishedPulling="2026-01-31 09:13:19.278337032 +0000 UTC m=+737.584213236" observedRunningTime="2026-01-31 09:13:24.49302585 +0000 UTC m=+742.798902054" watchObservedRunningTime="2026-01-31 09:13:24.494909539 +0000 UTC m=+742.800785753"
E0131 09:13:24.577556 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:24 crc kubenswrapper[4732]: E0131 09:13:24.589705 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:24 crc kubenswrapper[4732]: E0131 09:13:24.602732 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:27 crc kubenswrapper[4732]: E0131 09:13:27.680270 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:27 crc kubenswrapper[4732]: E0131 09:13:27.703570 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:27 crc kubenswrapper[4732]: E0131 09:13:27.723524 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:30 crc kubenswrapper[4732]: I0131 09:13:30.660912 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:30 crc kubenswrapper[4732]: I0131 09:13:30.662485 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:30 crc kubenswrapper[4732]: I0131 09:13:30.665767 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:30 crc kubenswrapper[4732]: I0131 09:13:30.665967 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:13:30 crc kubenswrapper[4732]: I0131 09:13:30.686482 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:30 crc kubenswrapper[4732]: I0131 09:13:30.686851 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:30 crc kubenswrapper[4732]: E0131 09:13:30.767573 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:30 crc kubenswrapper[4732]: E0131 09:13:30.784296 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 
09:13:30 crc kubenswrapper[4732]: E0131 09:13:30.802327 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:31 crc kubenswrapper[4732]: I0131 09:13:31.228112 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.084491 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.085193 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.087217 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"memcached-memcached-dockercfg-grf6g" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.087377 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"memcached-config-data" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.096245 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.240253 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-kolla-config\") pod \"memcached-0\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.240381 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvmz6\" (UniqueName: \"kubernetes.io/projected/c0d4fa62-a33c-4ab2-a446-697994c1541e-kube-api-access-rvmz6\") pod \"memcached-0\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.240416 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-config-data\") pod \"memcached-0\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.341783 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvmz6\" (UniqueName: \"kubernetes.io/projected/c0d4fa62-a33c-4ab2-a446-697994c1541e-kube-api-access-rvmz6\") pod \"memcached-0\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.341951 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-config-data\") pod \"memcached-0\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.342073 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-kolla-config\") pod \"memcached-0\" (UID: 
\"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.343049 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-kolla-config\") pod \"memcached-0\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.343181 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-config-data\") pod \"memcached-0\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.365524 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvmz6\" (UniqueName: \"kubernetes.io/projected/c0d4fa62-a33c-4ab2-a446-697994c1541e-kube-api-access-rvmz6\") pod \"memcached-0\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.401677 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.635197 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.845986 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-lrnjt"] Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.846739 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.849043 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-dzl44" Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.860508 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-lrnjt"] Jan 31 09:13:32 crc kubenswrapper[4732]: I0131 09:13:32.951146 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5gcb\" (UniqueName: \"kubernetes.io/projected/908a1c63-d3ff-4714-a13a-788ad05f37f7-kube-api-access-h5gcb\") pod \"rabbitmq-cluster-operator-index-lrnjt\" (UID: \"908a1c63-d3ff-4714-a13a-788ad05f37f7\") " pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" Jan 31 09:13:33 crc kubenswrapper[4732]: I0131 09:13:33.052831 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5gcb\" (UniqueName: \"kubernetes.io/projected/908a1c63-d3ff-4714-a13a-788ad05f37f7-kube-api-access-h5gcb\") pod \"rabbitmq-cluster-operator-index-lrnjt\" (UID: \"908a1c63-d3ff-4714-a13a-788ad05f37f7\") " pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" Jan 31 09:13:33 crc kubenswrapper[4732]: I0131 09:13:33.069397 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5gcb\" (UniqueName: \"kubernetes.io/projected/908a1c63-d3ff-4714-a13a-788ad05f37f7-kube-api-access-h5gcb\") pod \"rabbitmq-cluster-operator-index-lrnjt\" (UID: \"908a1c63-d3ff-4714-a13a-788ad05f37f7\") " pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" Jan 
31 09:13:33 crc kubenswrapper[4732]: I0131 09:13:33.164397 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" Jan 31 09:13:33 crc kubenswrapper[4732]: I0131 09:13:33.521655 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"c0d4fa62-a33c-4ab2-a446-697994c1541e","Type":"ContainerStarted","Data":"d82037df40928405007453003da8d4a3924f1437dbd107f02f6b845354809fb0"} Jan 31 09:13:33 crc kubenswrapper[4732]: I0131 09:13:33.589200 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-lrnjt"] Jan 31 09:13:33 crc kubenswrapper[4732]: W0131 09:13:33.597534 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod908a1c63_d3ff_4714_a13a_788ad05f37f7.slice/crio-b0f9604746eebb998ff2881e738d249d0d1324f93e1f0daf2a43d6211251d2fd WatchSource:0}: Error finding container b0f9604746eebb998ff2881e738d249d0d1324f93e1f0daf2a43d6211251d2fd: Status 404 returned error can't find the container with id b0f9604746eebb998ff2881e738d249d0d1324f93e1f0daf2a43d6211251d2fd Jan 31 09:13:33 crc kubenswrapper[4732]: E0131 09:13:33.840264 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:33 crc kubenswrapper[4732]: E0131 09:13:33.852550 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:33 crc kubenswrapper[4732]: E0131 09:13:33.867532 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:34 crc kubenswrapper[4732]: I0131 09:13:34.529012 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" event={"ID":"908a1c63-d3ff-4714-a13a-788ad05f37f7","Type":"ContainerStarted","Data":"b0f9604746eebb998ff2881e738d249d0d1324f93e1f0daf2a43d6211251d2fd"} Jan 31 09:13:35 crc kubenswrapper[4732]: I0131 09:13:35.537457 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"c0d4fa62-a33c-4ab2-a446-697994c1541e","Type":"ContainerStarted","Data":"3bead2528c2ca5acd2f850812b4f5c0dc70486b1bf295a4a4a607b3a99c2b59e"} Jan 31 09:13:35 crc kubenswrapper[4732]: I0131 09:13:35.538924 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:35 crc kubenswrapper[4732]: I0131 09:13:35.561216 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/memcached-0" podStartSLOduration=1.620183739 podStartE2EDuration="3.561090059s" podCreationTimestamp="2026-01-31 09:13:32 +0000 UTC" firstStartedPulling="2026-01-31 09:13:32.639237214 +0000 UTC m=+750.945113418" lastFinishedPulling="2026-01-31 09:13:34.580143524 +0000 UTC m=+752.886019738" observedRunningTime="2026-01-31 09:13:35.557423006 +0000 UTC m=+753.863299220" watchObservedRunningTime="2026-01-31 
09:13:35.561090059 +0000 UTC m=+753.866966273" Jan 31 09:13:36 crc kubenswrapper[4732]: E0131 09:13:36.937928 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:36 crc kubenswrapper[4732]: E0131 09:13:36.963735 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:36 crc kubenswrapper[4732]: E0131 09:13:36.988073 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:37 crc kubenswrapper[4732]: I0131 09:13:37.034148 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-lrnjt"] Jan 31 09:13:37 crc kubenswrapper[4732]: I0131 09:13:37.647582 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hbtxq"] Jan 31 09:13:37 crc kubenswrapper[4732]: I0131 09:13:37.648865 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" Jan 31 09:13:37 crc kubenswrapper[4732]: I0131 09:13:37.665890 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hbtxq"] Jan 31 09:13:37 crc kubenswrapper[4732]: I0131 09:13:37.712951 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdnc7\" (UniqueName: \"kubernetes.io/projected/9ff1fd2b-a8cb-4050-9a54-3117be6964ce-kube-api-access-tdnc7\") pod \"rabbitmq-cluster-operator-index-hbtxq\" (UID: \"9ff1fd2b-a8cb-4050-9a54-3117be6964ce\") " pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" Jan 31 09:13:37 crc kubenswrapper[4732]: I0131 09:13:37.814093 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdnc7\" (UniqueName: \"kubernetes.io/projected/9ff1fd2b-a8cb-4050-9a54-3117be6964ce-kube-api-access-tdnc7\") pod \"rabbitmq-cluster-operator-index-hbtxq\" (UID: \"9ff1fd2b-a8cb-4050-9a54-3117be6964ce\") " pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" Jan 31 09:13:37 crc kubenswrapper[4732]: I0131 09:13:37.831246 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdnc7\" (UniqueName: \"kubernetes.io/projected/9ff1fd2b-a8cb-4050-9a54-3117be6964ce-kube-api-access-tdnc7\") pod \"rabbitmq-cluster-operator-index-hbtxq\" (UID: \"9ff1fd2b-a8cb-4050-9a54-3117be6964ce\") " pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" Jan 31 09:13:37 crc kubenswrapper[4732]: I0131 09:13:37.966038 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" Jan 31 09:13:38 crc kubenswrapper[4732]: I0131 09:13:38.480270 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hbtxq"] Jan 31 09:13:38 crc kubenswrapper[4732]: W0131 09:13:38.483392 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff1fd2b_a8cb_4050_9a54_3117be6964ce.slice/crio-5f27d35f0d0db33259c32fd8459b7826ea94c3c12989e06661b903a56351ab64 WatchSource:0}: Error finding container 5f27d35f0d0db33259c32fd8459b7826ea94c3c12989e06661b903a56351ab64: Status 404 returned error can't find the container with id 5f27d35f0d0db33259c32fd8459b7826ea94c3c12989e06661b903a56351ab64 Jan 31 09:13:38 crc kubenswrapper[4732]: I0131 09:13:38.562535 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" event={"ID":"908a1c63-d3ff-4714-a13a-788ad05f37f7","Type":"ContainerStarted","Data":"b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da"} Jan 31 09:13:38 crc kubenswrapper[4732]: I0131 09:13:38.562615 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" podUID="908a1c63-d3ff-4714-a13a-788ad05f37f7" containerName="registry-server" containerID="cri-o://b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da" gracePeriod=2 Jan 31 09:13:38 crc kubenswrapper[4732]: I0131 09:13:38.563590 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" event={"ID":"9ff1fd2b-a8cb-4050-9a54-3117be6964ce","Type":"ContainerStarted","Data":"5f27d35f0d0db33259c32fd8459b7826ea94c3c12989e06661b903a56351ab64"} Jan 31 09:13:38 crc kubenswrapper[4732]: I0131 09:13:38.587993 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" podStartSLOduration=2.79704732 podStartE2EDuration="6.587970077s" podCreationTimestamp="2026-01-31 09:13:32 +0000 UTC" firstStartedPulling="2026-01-31 09:13:33.601380445 +0000 UTC m=+751.907256649" lastFinishedPulling="2026-01-31 09:13:37.392303202 +0000 UTC m=+755.698179406" observedRunningTime="2026-01-31 09:13:38.586163241 +0000 UTC m=+756.892039455" watchObservedRunningTime="2026-01-31 09:13:38.587970077 +0000 UTC m=+756.893846281" Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.053499 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.136272 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5gcb\" (UniqueName: \"kubernetes.io/projected/908a1c63-d3ff-4714-a13a-788ad05f37f7-kube-api-access-h5gcb\") pod \"908a1c63-d3ff-4714-a13a-788ad05f37f7\" (UID: \"908a1c63-d3ff-4714-a13a-788ad05f37f7\") " Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.144991 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908a1c63-d3ff-4714-a13a-788ad05f37f7-kube-api-access-h5gcb" (OuterVolumeSpecName: "kube-api-access-h5gcb") pod "908a1c63-d3ff-4714-a13a-788ad05f37f7" (UID: "908a1c63-d3ff-4714-a13a-788ad05f37f7"). InnerVolumeSpecName "kube-api-access-h5gcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.238472 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5gcb\" (UniqueName: \"kubernetes.io/projected/908a1c63-d3ff-4714-a13a-788ad05f37f7-kube-api-access-h5gcb\") on node \"crc\" DevicePath \"\"" Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.570630 4732 generic.go:334] "Generic (PLEG): container finished" podID="908a1c63-d3ff-4714-a13a-788ad05f37f7" containerID="b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da" exitCode=0 Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.570698 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.570698 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" event={"ID":"908a1c63-d3ff-4714-a13a-788ad05f37f7","Type":"ContainerDied","Data":"b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da"} Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.570773 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-lrnjt" event={"ID":"908a1c63-d3ff-4714-a13a-788ad05f37f7","Type":"ContainerDied","Data":"b0f9604746eebb998ff2881e738d249d0d1324f93e1f0daf2a43d6211251d2fd"} Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.570795 4732 scope.go:117] "RemoveContainer" containerID="b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da" Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.573004 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" event={"ID":"9ff1fd2b-a8cb-4050-9a54-3117be6964ce","Type":"ContainerStarted","Data":"fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0"} Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.599116 4732 scope.go:117] "RemoveContainer" containerID="b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da" Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.599082 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" podStartSLOduration=2.186065468 podStartE2EDuration="2.599061663s" podCreationTimestamp="2026-01-31 09:13:37 +0000 UTC" firstStartedPulling="2026-01-31 09:13:38.487004892 +0000 UTC m=+756.792881096" lastFinishedPulling="2026-01-31 09:13:38.900001087 +0000 UTC m=+757.205877291" observedRunningTime="2026-01-31 09:13:39.598823685 +0000 UTC m=+757.904699889" watchObservedRunningTime="2026-01-31 09:13:39.599061663 +0000 UTC m=+757.904937867" Jan 31 09:13:39 crc kubenswrapper[4732]: E0131 09:13:39.601136 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da\": container with ID starting with b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da not found: ID does not exist" containerID="b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da" Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.601186 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da"} err="failed to get container status 
\"b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da\": rpc error: code = NotFound desc = could not find container \"b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da\": container with ID starting with b6942d86c2492a85d9f2a17fb0552e77a713b2c97f4c20d64ac38f0270e022da not found: ID does not exist" Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.613366 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-lrnjt"] Jan 31 09:13:39 crc kubenswrapper[4732]: I0131 09:13:39.622207 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-lrnjt"] Jan 31 09:13:40 crc kubenswrapper[4732]: E0131 09:13:40.025461 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:40 crc kubenswrapper[4732]: E0131 09:13:40.037161 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:40 crc kubenswrapper[4732]: E0131 09:13:40.050497 4732 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=6255804368796246868, SKID=, AKID=4E:48:BE:F2:C8:02:40:69:34:FD:3B:08:7C:67:11:C4:20:0F:82:02 failed: x509: certificate signed by unknown authority" Jan 31 09:13:40 crc kubenswrapper[4732]: I0131 09:13:40.550121 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908a1c63-d3ff-4714-a13a-788ad05f37f7" path="/var/lib/kubelet/pods/908a1c63-d3ff-4714-a13a-788ad05f37f7/volumes" Jan 31 09:13:41 crc kubenswrapper[4732]: I0131 09:13:41.615949 4732 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 09:13:42 crc kubenswrapper[4732]: I0131 09:13:42.403820 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/memcached-0" Jan 31 09:13:47 crc kubenswrapper[4732]: I0131 09:13:47.497875 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:13:47 crc kubenswrapper[4732]: I0131 09:13:47.499420 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:13:47 crc kubenswrapper[4732]: I0131 09:13:47.967020 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" Jan 31 09:13:47 crc kubenswrapper[4732]: I0131 09:13:47.967374 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" Jan 31 09:13:48 crc kubenswrapper[4732]: I0131 09:13:48.009470 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" Jan 31 09:13:48 crc kubenswrapper[4732]: I0131 09:13:48.667683 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" Jan 31 09:13:48 crc kubenswrapper[4732]: I0131 09:13:48.877468 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:48 crc kubenswrapper[4732]: I0131 09:13:48.973502 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:13:49 crc kubenswrapper[4732]: E0131 09:13:49.132787 4732 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.231:56272->38.129.56.231:32957: read tcp 38.129.56.231:56272->38.129.56.231:32957: read: connection reset by peer Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.414128 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-pjmfd"] Jan 31 09:13:49 crc kubenswrapper[4732]: E0131 09:13:49.414462 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908a1c63-d3ff-4714-a13a-788ad05f37f7" containerName="registry-server" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.414482 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="908a1c63-d3ff-4714-a13a-788ad05f37f7" containerName="registry-server" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.414732 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="908a1c63-d3ff-4714-a13a-788ad05f37f7" containerName="registry-server" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.415359 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.419495 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-pjmfd"] Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.420230 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.503914 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d8a630-8f89-44aa-9f24-2f1b279cccfd-operator-scripts\") pod \"root-account-create-update-pjmfd\" (UID: \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\") " pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.504079 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s867q\" (UniqueName: \"kubernetes.io/projected/51d8a630-8f89-44aa-9f24-2f1b279cccfd-kube-api-access-s867q\") pod \"root-account-create-update-pjmfd\" (UID: \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\") " pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.605580 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s867q\" (UniqueName: \"kubernetes.io/projected/51d8a630-8f89-44aa-9f24-2f1b279cccfd-kube-api-access-s867q\") pod \"root-account-create-update-pjmfd\" (UID: \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\") " pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.605641 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d8a630-8f89-44aa-9f24-2f1b279cccfd-operator-scripts\") pod \"root-account-create-update-pjmfd\" (UID: \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\") " pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.606565 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d8a630-8f89-44aa-9f24-2f1b279cccfd-operator-scripts\") pod \"root-account-create-update-pjmfd\" (UID: \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\") " pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.629284 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s867q\" (UniqueName: \"kubernetes.io/projected/51d8a630-8f89-44aa-9f24-2f1b279cccfd-kube-api-access-s867q\") pod \"root-account-create-update-pjmfd\" (UID: \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\") " pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:49 crc kubenswrapper[4732]: I0131 09:13:49.737075 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:50 crc kubenswrapper[4732]: I0131 09:13:50.161020 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-pjmfd"] Jan 31 09:13:50 crc kubenswrapper[4732]: W0131 09:13:50.170282 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51d8a630_8f89_44aa_9f24_2f1b279cccfd.slice/crio-5501f4bdd15cc3ad3ae8e942228bb8dad6ce22eba0b56a8c59b70902b46cc1f9 WatchSource:0}: Error finding container 5501f4bdd15cc3ad3ae8e942228bb8dad6ce22eba0b56a8c59b70902b46cc1f9: Status 404 returned error can't find the container with id 5501f4bdd15cc3ad3ae8e942228bb8dad6ce22eba0b56a8c59b70902b46cc1f9 Jan 31 09:13:50 crc kubenswrapper[4732]: I0131 09:13:50.641047 4732 generic.go:334] "Generic (PLEG): container finished" podID="51d8a630-8f89-44aa-9f24-2f1b279cccfd" containerID="9e931ecd0e73f528e7ed41d0a730c73d763f5b49adf745bec9e51e72d6012d62" exitCode=0 Jan 31 09:13:50 crc kubenswrapper[4732]: I0131 09:13:50.641933 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-pjmfd" event={"ID":"51d8a630-8f89-44aa-9f24-2f1b279cccfd","Type":"ContainerDied","Data":"9e931ecd0e73f528e7ed41d0a730c73d763f5b49adf745bec9e51e72d6012d62"} Jan 31 09:13:50 crc kubenswrapper[4732]: I0131 09:13:50.641966 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-pjmfd" event={"ID":"51d8a630-8f89-44aa-9f24-2f1b279cccfd","Type":"ContainerStarted","Data":"5501f4bdd15cc3ad3ae8e942228bb8dad6ce22eba0b56a8c59b70902b46cc1f9"} Jan 31 09:13:51 crc kubenswrapper[4732]: I0131 09:13:51.928381 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:52 crc kubenswrapper[4732]: I0131 09:13:52.042855 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d8a630-8f89-44aa-9f24-2f1b279cccfd-operator-scripts\") pod \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\" (UID: \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\") " Jan 31 09:13:52 crc kubenswrapper[4732]: I0131 09:13:52.042950 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s867q\" (UniqueName: \"kubernetes.io/projected/51d8a630-8f89-44aa-9f24-2f1b279cccfd-kube-api-access-s867q\") pod \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\" (UID: \"51d8a630-8f89-44aa-9f24-2f1b279cccfd\") " Jan 31 09:13:52 crc kubenswrapper[4732]: I0131 09:13:52.043577 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d8a630-8f89-44aa-9f24-2f1b279cccfd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51d8a630-8f89-44aa-9f24-2f1b279cccfd" (UID: "51d8a630-8f89-44aa-9f24-2f1b279cccfd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:13:52 crc kubenswrapper[4732]: I0131 09:13:52.049824 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d8a630-8f89-44aa-9f24-2f1b279cccfd-kube-api-access-s867q" (OuterVolumeSpecName: "kube-api-access-s867q") pod "51d8a630-8f89-44aa-9f24-2f1b279cccfd" (UID: "51d8a630-8f89-44aa-9f24-2f1b279cccfd"). InnerVolumeSpecName "kube-api-access-s867q". 
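
Note: The manager.go:1169 warning ("Failed to process watch event ... Status 404 returned error can't find the container") recurs for each short-lived pod in this log: the resource manager sees the cgroup-creation event for a crio-<id> scope, but by the time it queries for the container, the container can already be gone. It is a lookup race, not a failure. Schematically (all names here are illustrative, not cAdvisor source):

    package main

    import (
        "errors"
        "log"
    )

    var errNotFound = errors.New("can't find the container with id")

    // handleWatchEvent treats a missing container as a benign race.
    func handleWatchEvent(lookup func(id string) error, id string) {
        if err := lookup(id); errors.Is(err, errNotFound) {
            log.Printf("container %s already gone; skipping watch event", id)
        } else if err != nil {
            log.Printf("watch event for container %s failed: %v", id, err)
        }
    }

    func main() {
        // Simulate the race: the container vanished before the lookup.
        lookup := func(id string) error { return errNotFound }
        handleWatchEvent(lookup, "5501f4bdd15c")
    }
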
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:13:52 crc kubenswrapper[4732]: I0131 09:13:52.144665 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d8a630-8f89-44aa-9f24-2f1b279cccfd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:13:52 crc kubenswrapper[4732]: I0131 09:13:52.144736 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s867q\" (UniqueName: \"kubernetes.io/projected/51d8a630-8f89-44aa-9f24-2f1b279cccfd-kube-api-access-s867q\") on node \"crc\" DevicePath \"\"" Jan 31 09:13:52 crc kubenswrapper[4732]: I0131 09:13:52.656143 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-pjmfd" event={"ID":"51d8a630-8f89-44aa-9f24-2f1b279cccfd","Type":"ContainerDied","Data":"5501f4bdd15cc3ad3ae8e942228bb8dad6ce22eba0b56a8c59b70902b46cc1f9"} Jan 31 09:13:52 crc kubenswrapper[4732]: I0131 09:13:52.656184 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5501f4bdd15cc3ad3ae8e942228bb8dad6ce22eba0b56a8c59b70902b46cc1f9" Jan 31 09:13:52 crc kubenswrapper[4732]: I0131 09:13:52.656548 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-pjmfd" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.305057 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl"] Jan 31 09:13:56 crc kubenswrapper[4732]: E0131 09:13:56.305922 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d8a630-8f89-44aa-9f24-2f1b279cccfd" containerName="mariadb-account-create-update" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.305941 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d8a630-8f89-44aa-9f24-2f1b279cccfd" containerName="mariadb-account-create-update" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.306075 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d8a630-8f89-44aa-9f24-2f1b279cccfd" containerName="mariadb-account-create-update" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.307144 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.309787 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tnztr" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.315265 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl"] Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.398344 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.398416 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.398450 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq98m\" (UniqueName: \"kubernetes.io/projected/86012593-15ec-4f3c-aaa4-c0522a918019-kube-api-access-bq98m\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.500208 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq98m\" (UniqueName: \"kubernetes.io/projected/86012593-15ec-4f3c-aaa4-c0522a918019-kube-api-access-bq98m\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.500347 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.500885 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.500969 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.501256 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.523773 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq98m\" (UniqueName: \"kubernetes.io/projected/86012593-15ec-4f3c-aaa4-c0522a918019-kube-api-access-bq98m\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:56 crc kubenswrapper[4732]: I0131 09:13:56.633422 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:13:57 crc kubenswrapper[4732]: I0131 09:13:57.121227 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl"] Jan 31 09:13:57 crc kubenswrapper[4732]: W0131 09:13:57.139161 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86012593_15ec_4f3c_aaa4_c0522a918019.slice/crio-30258f3e489223f658f207a8ef31d72f78818b3b43d7efd1cb289a60181d3e4c WatchSource:0}: Error finding container 30258f3e489223f658f207a8ef31d72f78818b3b43d7efd1cb289a60181d3e4c: Status 404 returned error can't find the container with id 30258f3e489223f658f207a8ef31d72f78818b3b43d7efd1cb289a60181d3e4c Jan 31 09:13:57 crc kubenswrapper[4732]: I0131 09:13:57.696591 4732 generic.go:334] "Generic (PLEG): container finished" podID="86012593-15ec-4f3c-aaa4-c0522a918019" containerID="5797783d92fce2d40396a5e635eb5cba9fcbde2d8555d22fc23d15fbbb6337bb" exitCode=0 Jan 31 09:13:57 crc kubenswrapper[4732]: I0131 09:13:57.696638 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" event={"ID":"86012593-15ec-4f3c-aaa4-c0522a918019","Type":"ContainerDied","Data":"5797783d92fce2d40396a5e635eb5cba9fcbde2d8555d22fc23d15fbbb6337bb"} Jan 31 09:13:57 crc kubenswrapper[4732]: I0131 09:13:57.696698 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" event={"ID":"86012593-15ec-4f3c-aaa4-c0522a918019","Type":"ContainerStarted","Data":"30258f3e489223f658f207a8ef31d72f78818b3b43d7efd1cb289a60181d3e4c"} Jan 31 09:13:58 crc kubenswrapper[4732]: I0131 09:13:58.708179 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" event={"ID":"86012593-15ec-4f3c-aaa4-c0522a918019","Type":"ContainerStarted","Data":"dddea144384fd97086387399ccac7f5e8ab364f8aa2e9d3b1d93f34363e9cf4d"} Jan 31 09:13:59 crc 
kubenswrapper[4732]: I0131 09:13:59.128872 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:59 crc kubenswrapper[4732]: I0131 09:13:59.207540 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:13:59 crc kubenswrapper[4732]: I0131 09:13:59.720803 4732 generic.go:334] "Generic (PLEG): container finished" podID="86012593-15ec-4f3c-aaa4-c0522a918019" containerID="dddea144384fd97086387399ccac7f5e8ab364f8aa2e9d3b1d93f34363e9cf4d" exitCode=0 Jan 31 09:13:59 crc kubenswrapper[4732]: I0131 09:13:59.720853 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" event={"ID":"86012593-15ec-4f3c-aaa4-c0522a918019","Type":"ContainerDied","Data":"dddea144384fd97086387399ccac7f5e8ab364f8aa2e9d3b1d93f34363e9cf4d"} Jan 31 09:14:00 crc kubenswrapper[4732]: I0131 09:14:00.728356 4732 generic.go:334] "Generic (PLEG): container finished" podID="86012593-15ec-4f3c-aaa4-c0522a918019" containerID="87f7881198e2bf8721939abbe73ccd62c3cbc8681018136ac024bf795609a719" exitCode=0 Jan 31 09:14:00 crc kubenswrapper[4732]: I0131 09:14:00.728586 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" event={"ID":"86012593-15ec-4f3c-aaa4-c0522a918019","Type":"ContainerDied","Data":"87f7881198e2bf8721939abbe73ccd62c3cbc8681018136ac024bf795609a719"} Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.097020 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.181484 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq98m\" (UniqueName: \"kubernetes.io/projected/86012593-15ec-4f3c-aaa4-c0522a918019-kube-api-access-bq98m\") pod \"86012593-15ec-4f3c-aaa4-c0522a918019\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.181545 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-bundle\") pod \"86012593-15ec-4f3c-aaa4-c0522a918019\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.181576 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-util\") pod \"86012593-15ec-4f3c-aaa4-c0522a918019\" (UID: \"86012593-15ec-4f3c-aaa4-c0522a918019\") " Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.183157 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-bundle" (OuterVolumeSpecName: "bundle") pod "86012593-15ec-4f3c-aaa4-c0522a918019" (UID: "86012593-15ec-4f3c-aaa4-c0522a918019"). InnerVolumeSpecName "bundle". 
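
Note: Once every UnmountVolume.TearDown for a pod has succeeded and the volumes are reported detached, the kubelet can remove the pod's volume directory, as in the earlier "Cleaned up orphaned pod volumes dir" entry for /var/lib/kubelet/pods/908a1c63-d3ff-4714-a13a-788ad05f37f7/volumes. A sketch of that layout check (the path helper is illustrative, not kubelet source; only the directory layout comes from the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // podVolumesDir mirrors the layout visible in the log:
    // /var/lib/kubelet/pods/<pod-uid>/volumes
    func podVolumesDir(root, podUID string) string {
        return filepath.Join(root, "pods", podUID, "volumes")
    }

    func main() {
        dir := podVolumesDir("/var/lib/kubelet", "86012593-15ec-4f3c-aaa4-c0522a918019")
        entries, err := os.ReadDir(dir)
        if err == nil && len(entries) == 0 {
            fmt.Println("volumes dir empty; eligible for cleanup:", dir)
        } else {
            fmt.Println("not cleaning up:", dir, err)
        }
    }
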
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.189950 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86012593-15ec-4f3c-aaa4-c0522a918019-kube-api-access-bq98m" (OuterVolumeSpecName: "kube-api-access-bq98m") pod "86012593-15ec-4f3c-aaa4-c0522a918019" (UID: "86012593-15ec-4f3c-aaa4-c0522a918019"). InnerVolumeSpecName "kube-api-access-bq98m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.194140 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-util" (OuterVolumeSpecName: "util") pod "86012593-15ec-4f3c-aaa4-c0522a918019" (UID: "86012593-15ec-4f3c-aaa4-c0522a918019"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.282976 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq98m\" (UniqueName: \"kubernetes.io/projected/86012593-15ec-4f3c-aaa4-c0522a918019-kube-api-access-bq98m\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.283016 4732 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.283030 4732 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86012593-15ec-4f3c-aaa4-c0522a918019-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.746781 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" event={"ID":"86012593-15ec-4f3c-aaa4-c0522a918019","Type":"ContainerDied","Data":"30258f3e489223f658f207a8ef31d72f78818b3b43d7efd1cb289a60181d3e4c"} Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.747188 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30258f3e489223f658f207a8ef31d72f78818b3b43d7efd1cb289a60181d3e4c" Jan 31 09:14:02 crc kubenswrapper[4732]: I0131 09:14:02.746947 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl" Jan 31 09:14:03 crc kubenswrapper[4732]: I0131 09:14:03.226562 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:14:03 crc kubenswrapper[4732]: I0131 09:14:03.286609 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.196830 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954"] Jan 31 09:14:09 crc kubenswrapper[4732]: E0131 09:14:09.197440 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86012593-15ec-4f3c-aaa4-c0522a918019" containerName="util" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.197452 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="86012593-15ec-4f3c-aaa4-c0522a918019" containerName="util" Jan 31 09:14:09 crc kubenswrapper[4732]: E0131 09:14:09.197469 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86012593-15ec-4f3c-aaa4-c0522a918019" containerName="extract" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.197475 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="86012593-15ec-4f3c-aaa4-c0522a918019" containerName="extract" Jan 31 09:14:09 crc kubenswrapper[4732]: E0131 09:14:09.197488 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86012593-15ec-4f3c-aaa4-c0522a918019" containerName="pull" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.197495 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="86012593-15ec-4f3c-aaa4-c0522a918019" containerName="pull" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.197597 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="86012593-15ec-4f3c-aaa4-c0522a918019" containerName="extract" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.198028 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.200287 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-6ch9n" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.241537 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954"] Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.273861 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/79621d02-e834-4725-8b80-d0444f3b6487-kube-api-access-ztcb8\") pod \"rabbitmq-cluster-operator-779fc9694b-2t954\" (UID: \"79621d02-e834-4725-8b80-d0444f3b6487\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.375813 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/79621d02-e834-4725-8b80-d0444f3b6487-kube-api-access-ztcb8\") pod \"rabbitmq-cluster-operator-779fc9694b-2t954\" (UID: \"79621d02-e834-4725-8b80-d0444f3b6487\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.408269 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/79621d02-e834-4725-8b80-d0444f3b6487-kube-api-access-ztcb8\") pod \"rabbitmq-cluster-operator-779fc9694b-2t954\" (UID: \"79621d02-e834-4725-8b80-d0444f3b6487\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.518467 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" Jan 31 09:14:09 crc kubenswrapper[4732]: I0131 09:14:09.978694 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954"] Jan 31 09:14:10 crc kubenswrapper[4732]: I0131 09:14:10.797397 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" event={"ID":"79621d02-e834-4725-8b80-d0444f3b6487","Type":"ContainerStarted","Data":"2cb30d5ff86683ad564f64402e0e4a2144f56764496f3cb2ead909cfbd0f5de4"} Jan 31 09:14:13 crc kubenswrapper[4732]: I0131 09:14:13.820881 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" event={"ID":"79621d02-e834-4725-8b80-d0444f3b6487","Type":"ContainerStarted","Data":"288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b"} Jan 31 09:14:13 crc kubenswrapper[4732]: I0131 09:14:13.844377 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" podStartSLOduration=2.072218535 podStartE2EDuration="4.844362088s" podCreationTimestamp="2026-01-31 09:14:09 +0000 UTC" firstStartedPulling="2026-01-31 09:14:09.99436349 +0000 UTC m=+788.300239704" lastFinishedPulling="2026-01-31 09:14:12.766507033 +0000 UTC m=+791.072383257" observedRunningTime="2026-01-31 09:14:13.843930655 +0000 UTC m=+792.149806859" watchObservedRunningTime="2026-01-31 09:14:13.844362088 +0000 UTC m=+792.150238292" Jan 31 09:14:17 crc kubenswrapper[4732]: I0131 09:14:17.498009 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:14:17 crc kubenswrapper[4732]: I0131 09:14:17.498515 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:14:17 crc kubenswrapper[4732]: I0131 09:14:17.498602 4732 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:14:17 crc kubenswrapper[4732]: I0131 09:14:17.499470 4732 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8d3fd1eb561cfd678a2a0df1de54d984c12dc8e05f74e816b693d4b18b74a20"} pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:14:17 crc kubenswrapper[4732]: I0131 09:14:17.499592 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" containerID="cri-o://e8d3fd1eb561cfd678a2a0df1de54d984c12dc8e05f74e816b693d4b18b74a20" gracePeriod=600 Jan 31 09:14:17 crc kubenswrapper[4732]: I0131 09:14:17.847052 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d790207-d357-4b47-87bf-5b505e061820" 
containerID="e8d3fd1eb561cfd678a2a0df1de54d984c12dc8e05f74e816b693d4b18b74a20" exitCode=0 Jan 31 09:14:17 crc kubenswrapper[4732]: I0131 09:14:17.847138 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerDied","Data":"e8d3fd1eb561cfd678a2a0df1de54d984c12dc8e05f74e816b693d4b18b74a20"} Jan 31 09:14:17 crc kubenswrapper[4732]: I0131 09:14:17.847383 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"7aa4af54c816ede2288939b5eacffccc23edb9afc7e2a36ef42fe01d52b4ae91"} Jan 31 09:14:17 crc kubenswrapper[4732]: I0131 09:14:17.847413 4732 scope.go:117] "RemoveContainer" containerID="942a11834ff55816d19ec94b72706370701e25dcee37029bb97b73b2e3078f9b" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.402129 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.404060 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.407445 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-plugins-conf" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.407491 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-server-dockercfg-x8pqd" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.407543 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-default-user" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.407645 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-server-conf" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.407894 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.408829 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.518935 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-017be409-2d02-48b0-bb67-3403111bd6b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.518986 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.519004 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 
crc kubenswrapper[4732]: I0131 09:14:20.519031 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmtn\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-kube-api-access-xxmtn\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.519905 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.520419 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.520636 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.520928 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.621936 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.621994 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.622042 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.622121 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-017be409-2d02-48b0-bb67-3403111bd6b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " 
pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.622158 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.622195 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.622236 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmtn\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-kube-api-access-xxmtn\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.622259 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.623187 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.623337 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.624321 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.628459 4732 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.628509 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-017be409-2d02-48b0-bb67-3403111bd6b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b8d3f8071f0106d74010c769abe69563bad433a0e8012e7bb99dbaf8960d3d0a/globalmount\"" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.629525 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.629590 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.635599 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.645827 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmtn\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-kube-api-access-xxmtn\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.654398 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-017be409-2d02-48b0-bb67-3403111bd6b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9\") pod \"rabbitmq-server-0\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.721153 4732 util.go:30] "No sandbox for pod can be found. 
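The mount sequence above shows the kubelet's volume flow as printed for rabbitmq-server-0: a VerifyControllerAttachedVolume line per volume, then a MountVolume started line, then a MountVolume.SetUp succeeded line (the CSI-backed PVC additionally logs a MountDevice stage, skipped here because the hostpath provisioner does not advertise STAGE_UNSTAGE_VOLUME). A minimal post-processing sketch, assumed tooling rather than anything shipped with kubelet, that groups these messages per volume so one volume's sequence can be read in order:

```go
// Sketch: group kubelet volume messages by the quoted volume name.
// The phase substrings and the escaped-quote style are copied from the
// log above; the program itself is an assumed analysis aid.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// First quoted name after `for volume`, tolerating klog's escaped quotes.
	volRe := regexp.MustCompile(`for volume \\?"([^"\\]+)\\?"`)
	phases := []string{
		"VerifyControllerAttachedVolume started",
		"MountVolume started",
		"MountVolume.SetUp succeeded",
	}
	seq := map[string][]string{}

	sc := bufio.NewScanner(os.Stdin) // e.g. feed it the decompressed kubelet log
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		m := volRe.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		for _, phase := range phases {
			if strings.Contains(line, phase) {
				seq[m[1]] = append(seq[m[1]], phase)
			}
		}
	}
	for vol, ps := range seq {
		fmt.Printf("%s: %s\n", vol, strings.Join(ps, " -> "))
	}
}
```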
Jan 31 09:14:20 crc kubenswrapper[4732]: I0131 09:14:20.925390 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"]
Jan 31 09:14:20 crc kubenswrapper[4732]: W0131 09:14:20.948041 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc71a61f_ccf2_43bb_aedb_71f2ec9f03bd.slice/crio-c0069b5700dea7f3177d96246d005bd742c680b2404c8813992998fda3388237 WatchSource:0}: Error finding container c0069b5700dea7f3177d96246d005bd742c680b2404c8813992998fda3388237: Status 404 returned error can't find the container with id c0069b5700dea7f3177d96246d005bd742c680b2404c8813992998fda3388237
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.444273 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-hbpcb"]
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.444997 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.448395 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-lsl78"
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.462705 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-hbpcb"]
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.533594 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnrxr\" (UniqueName: \"kubernetes.io/projected/e617e130-c338-40d1-9a5c-83e925a4e6ed-kube-api-access-cnrxr\") pod \"keystone-operator-index-hbpcb\" (UID: \"e617e130-c338-40d1-9a5c-83e925a4e6ed\") " pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.634403 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnrxr\" (UniqueName: \"kubernetes.io/projected/e617e130-c338-40d1-9a5c-83e925a4e6ed-kube-api-access-cnrxr\") pod \"keystone-operator-index-hbpcb\" (UID: \"e617e130-c338-40d1-9a5c-83e925a4e6ed\") " pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.658714 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnrxr\" (UniqueName: \"kubernetes.io/projected/e617e130-c338-40d1-9a5c-83e925a4e6ed-kube-api-access-cnrxr\") pod \"keystone-operator-index-hbpcb\" (UID: \"e617e130-c338-40d1-9a5c-83e925a4e6ed\") " pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.769748 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.879937 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd","Type":"ContainerStarted","Data":"c0069b5700dea7f3177d96246d005bd742c680b2404c8813992998fda3388237"}
Jan 31 09:14:21 crc kubenswrapper[4732]: I0131 09:14:21.971814 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-hbpcb"]
Jan 31 09:14:22 crc kubenswrapper[4732]: I0131 09:14:22.901061 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-hbpcb" event={"ID":"e617e130-c338-40d1-9a5c-83e925a4e6ed","Type":"ContainerStarted","Data":"6b9559657e742c2dca3e80a6f30ed1e3ae9bae7e31be4ed1e6ca772141139c64"}
Jan 31 09:14:26 crc kubenswrapper[4732]: I0131 09:14:26.930212 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-hbpcb" event={"ID":"e617e130-c338-40d1-9a5c-83e925a4e6ed","Type":"ContainerStarted","Data":"c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d"}
Jan 31 09:14:26 crc kubenswrapper[4732]: I0131 09:14:26.952503 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-hbpcb" podStartSLOduration=2.27136689 podStartE2EDuration="5.952483926s" podCreationTimestamp="2026-01-31 09:14:21 +0000 UTC" firstStartedPulling="2026-01-31 09:14:21.986829917 +0000 UTC m=+800.292706121" lastFinishedPulling="2026-01-31 09:14:25.667946953 +0000 UTC m=+803.973823157" observedRunningTime="2026-01-31 09:14:26.950775924 +0000 UTC m=+805.256652128" watchObservedRunningTime="2026-01-31 09:14:26.952483926 +0000 UTC m=+805.258360140"
Jan 31 09:14:27 crc kubenswrapper[4732]: I0131 09:14:27.941058 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd","Type":"ContainerStarted","Data":"cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb"}
Jan 31 09:14:31 crc kubenswrapper[4732]: I0131 09:14:31.769840 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:14:31 crc kubenswrapper[4732]: I0131 09:14:31.770587 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:14:31 crc kubenswrapper[4732]: I0131 09:14:31.806802 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:14:32 crc kubenswrapper[4732]: I0131 09:14:32.013661 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.287596 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"]
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.289284 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
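The pod_startup_latency_tracker line for keystone-operator-index-hbpcb is internally consistent: podStartE2EDuration (5.952483926s) equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (2.27136689) equals that e2e figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, 3.681117036s). A short check of the arithmetic, with the timestamps copied verbatim from the line above (the program itself is purely illustrative):

```go
// Sketch: verify the startup-latency arithmetic printed above.
// Timestamps are copied from the log line; layout matches their format.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-31 09:14:21 +0000 UTC")
	firstPull := parse("2026-01-31 09:14:21.986829917 +0000 UTC")
	lastPull := parse("2026-01-31 09:14:25.667946953 +0000 UTC")
	running := parse("2026-01-31 09:14:26.952483926 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // 5.952483926s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // e2e minus pull window: 2.27136689s
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```

The same relation explains why rabbitmq-server-0 later reports a much larger SLO duration: its setup container runs for most of the e2e window, and only the pull time is excluded.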
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.292958 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tnztr"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.303309 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"]
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.397126 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.397458 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.397601 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg47l\" (UniqueName: \"kubernetes.io/projected/00bc065b-6932-4b27-bd33-5d8618f8a4f1-kube-api-access-lg47l\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.499278 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.499384 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.499478 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg47l\" (UniqueName: \"kubernetes.io/projected/00bc065b-6932-4b27-bd33-5d8618f8a4f1-kube-api-access-lg47l\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.500329 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.500559 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.538182 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg47l\" (UniqueName: \"kubernetes.io/projected/00bc065b-6932-4b27-bd33-5d8618f8a4f1-kube-api-access-lg47l\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:39 crc kubenswrapper[4732]: I0131 09:14:39.607062 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:40 crc kubenswrapper[4732]: I0131 09:14:40.107633 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"]
Jan 31 09:14:40 crc kubenswrapper[4732]: W0131 09:14:40.110300 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00bc065b_6932_4b27_bd33_5d8618f8a4f1.slice/crio-39e56caeeaf6133372e6dc2f21b7538a05d6ef95b092a2c87140dea950c8850a WatchSource:0}: Error finding container 39e56caeeaf6133372e6dc2f21b7538a05d6ef95b092a2c87140dea950c8850a: Status 404 returned error can't find the container with id 39e56caeeaf6133372e6dc2f21b7538a05d6ef95b092a2c87140dea950c8850a
Jan 31 09:14:41 crc kubenswrapper[4732]: I0131 09:14:41.046747 4732 generic.go:334] "Generic (PLEG): container finished" podID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerID="75bdd5fe1a4f1b74d5a4dbcdde8f474e6bc05519374a21ed6a1e8f88735183f3" exitCode=0
Jan 31 09:14:41 crc kubenswrapper[4732]: I0131 09:14:41.046802 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj" event={"ID":"00bc065b-6932-4b27-bd33-5d8618f8a4f1","Type":"ContainerDied","Data":"75bdd5fe1a4f1b74d5a4dbcdde8f474e6bc05519374a21ed6a1e8f88735183f3"}
Jan 31 09:14:41 crc kubenswrapper[4732]: I0131 09:14:41.047058 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj" event={"ID":"00bc065b-6932-4b27-bd33-5d8618f8a4f1","Type":"ContainerStarted","Data":"39e56caeeaf6133372e6dc2f21b7538a05d6ef95b092a2c87140dea950c8850a"}
Jan 31 09:14:42 crc kubenswrapper[4732]: I0131 09:14:42.059260 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj" event={"ID":"00bc065b-6932-4b27-bd33-5d8618f8a4f1","Type":"ContainerStarted","Data":"a37225baa762efba0da855108c4e3b4942f286a7d0c711d1300d72e14f4f3a67"}
Jan 31 09:14:43 crc kubenswrapper[4732]: I0131 09:14:43.069322 4732 generic.go:334] "Generic (PLEG): container finished" podID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerID="a37225baa762efba0da855108c4e3b4942f286a7d0c711d1300d72e14f4f3a67" exitCode=0
Jan 31 09:14:43 crc kubenswrapper[4732]: I0131 09:14:43.069396 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj" event={"ID":"00bc065b-6932-4b27-bd33-5d8618f8a4f1","Type":"ContainerDied","Data":"a37225baa762efba0da855108c4e3b4942f286a7d0c711d1300d72e14f4f3a67"}
Jan 31 09:14:44 crc kubenswrapper[4732]: I0131 09:14:44.080520 4732 generic.go:334] "Generic (PLEG): container finished" podID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerID="e8c730305c3c39eb9242a11659fb49c02690f7b2e9c98fd70ae0da288de53e06" exitCode=0
Jan 31 09:14:44 crc kubenswrapper[4732]: I0131 09:14:44.080582 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj" event={"ID":"00bc065b-6932-4b27-bd33-5d8618f8a4f1","Type":"ContainerDied","Data":"e8c730305c3c39eb9242a11659fb49c02690f7b2e9c98fd70ae0da288de53e06"}
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.358163 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.485105 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg47l\" (UniqueName: \"kubernetes.io/projected/00bc065b-6932-4b27-bd33-5d8618f8a4f1-kube-api-access-lg47l\") pod \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") "
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.485171 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-bundle\") pod \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") "
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.485286 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-util\") pod \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\" (UID: \"00bc065b-6932-4b27-bd33-5d8618f8a4f1\") "
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.486226 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-bundle" (OuterVolumeSpecName: "bundle") pod "00bc065b-6932-4b27-bd33-5d8618f8a4f1" (UID: "00bc065b-6932-4b27-bd33-5d8618f8a4f1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.492823 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00bc065b-6932-4b27-bd33-5d8618f8a4f1-kube-api-access-lg47l" (OuterVolumeSpecName: "kube-api-access-lg47l") pod "00bc065b-6932-4b27-bd33-5d8618f8a4f1" (UID: "00bc065b-6932-4b27-bd33-5d8618f8a4f1"). InnerVolumeSpecName "kube-api-access-lg47l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.499822 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-util" (OuterVolumeSpecName: "util") pod "00bc065b-6932-4b27-bd33-5d8618f8a4f1" (UID: "00bc065b-6932-4b27-bd33-5d8618f8a4f1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.586988 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg47l\" (UniqueName: \"kubernetes.io/projected/00bc065b-6932-4b27-bd33-5d8618f8a4f1-kube-api-access-lg47l\") on node \"crc\" DevicePath \"\""
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.587183 4732 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 09:14:45 crc kubenswrapper[4732]: I0131 09:14:45.587273 4732 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00bc065b-6932-4b27-bd33-5d8618f8a4f1-util\") on node \"crc\" DevicePath \"\""
Jan 31 09:14:46 crc kubenswrapper[4732]: I0131 09:14:46.097019 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj" event={"ID":"00bc065b-6932-4b27-bd33-5d8618f8a4f1","Type":"ContainerDied","Data":"39e56caeeaf6133372e6dc2f21b7538a05d6ef95b092a2c87140dea950c8850a"}
Jan 31 09:14:46 crc kubenswrapper[4732]: I0131 09:14:46.097060 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e56caeeaf6133372e6dc2f21b7538a05d6ef95b092a2c87140dea950c8850a"
Jan 31 09:14:46 crc kubenswrapper[4732]: I0131 09:14:46.097096 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.147988 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"]
Jan 31 09:15:00 crc kubenswrapper[4732]: E0131 09:15:00.148516 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerName="util"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.148526 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerName="util"
Jan 31 09:15:00 crc kubenswrapper[4732]: E0131 09:15:00.148541 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerName="pull"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.148546 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerName="pull"
Jan 31 09:15:00 crc kubenswrapper[4732]: E0131 09:15:00.148554 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerName="extract"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.148559 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerName="extract"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.148675 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" containerName="extract"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.149051 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.152326 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.152751 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.165414 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"]
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.215724 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a93c21ce-b808-4d18-a859-28ff7552d95f-secret-volume\") pod \"collect-profiles-29497515-gkkgr\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.215799 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg94k\" (UniqueName: \"kubernetes.io/projected/a93c21ce-b808-4d18-a859-28ff7552d95f-kube-api-access-vg94k\") pod \"collect-profiles-29497515-gkkgr\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.215847 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a93c21ce-b808-4d18-a859-28ff7552d95f-config-volume\") pod \"collect-profiles-29497515-gkkgr\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.311748 4732 generic.go:334] "Generic (PLEG): container finished" podID="dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" containerID="cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb" exitCode=0
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.311796 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd","Type":"ContainerDied","Data":"cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb"}
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.316528 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a93c21ce-b808-4d18-a859-28ff7552d95f-config-volume\") pod \"collect-profiles-29497515-gkkgr\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.316614 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a93c21ce-b808-4d18-a859-28ff7552d95f-secret-volume\") pod \"collect-profiles-29497515-gkkgr\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.316633 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg94k\" (UniqueName: \"kubernetes.io/projected/a93c21ce-b808-4d18-a859-28ff7552d95f-kube-api-access-vg94k\") pod \"collect-profiles-29497515-gkkgr\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.317457 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a93c21ce-b808-4d18-a859-28ff7552d95f-config-volume\") pod \"collect-profiles-29497515-gkkgr\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.333403 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a93c21ce-b808-4d18-a859-28ff7552d95f-secret-volume\") pod \"collect-profiles-29497515-gkkgr\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.338013 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg94k\" (UniqueName: \"kubernetes.io/projected/a93c21ce-b808-4d18-a859-28ff7552d95f-kube-api-access-vg94k\") pod \"collect-profiles-29497515-gkkgr\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.483996 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:00 crc kubenswrapper[4732]: I0131 09:15:00.912920 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"]
Jan 31 09:15:00 crc kubenswrapper[4732]: W0131 09:15:00.915197 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda93c21ce_b808_4d18_a859_28ff7552d95f.slice/crio-43927a110664fe95c7a0ead398c85a94f46078c8752304bb49ce1c7ab5c6a77d WatchSource:0}: Error finding container 43927a110664fe95c7a0ead398c85a94f46078c8752304bb49ce1c7ab5c6a77d: Status 404 returned error can't find the container with id 43927a110664fe95c7a0ead398c85a94f46078c8752304bb49ce1c7ab5c6a77d
Jan 31 09:15:01 crc kubenswrapper[4732]: I0131 09:15:01.320297 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd","Type":"ContainerStarted","Data":"5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d"}
Jan 31 09:15:01 crc kubenswrapper[4732]: I0131 09:15:01.320569 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/rabbitmq-server-0"
Jan 31 09:15:01 crc kubenswrapper[4732]: I0131 09:15:01.324233 4732 generic.go:334] "Generic (PLEG): container finished" podID="a93c21ce-b808-4d18-a859-28ff7552d95f" containerID="ff356dbcb2d64501fe1fc7779c8190b93c33e26705936fd0f0080aa9e9d8110a" exitCode=0
Jan 31 09:15:01 crc kubenswrapper[4732]: I0131 09:15:01.324281 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr" event={"ID":"a93c21ce-b808-4d18-a859-28ff7552d95f","Type":"ContainerDied","Data":"ff356dbcb2d64501fe1fc7779c8190b93c33e26705936fd0f0080aa9e9d8110a"}
Jan 31 09:15:01 crc kubenswrapper[4732]: I0131 09:15:01.324314 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr" event={"ID":"a93c21ce-b808-4d18-a859-28ff7552d95f","Type":"ContainerStarted","Data":"43927a110664fe95c7a0ead398c85a94f46078c8752304bb49ce1c7ab5c6a77d"}
Jan 31 09:15:01 crc kubenswrapper[4732]: I0131 09:15:01.343303 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/rabbitmq-server-0" podStartSLOduration=37.081265452 podStartE2EDuration="42.343285824s" podCreationTimestamp="2026-01-31 09:14:19 +0000 UTC" firstStartedPulling="2026-01-31 09:14:20.951751724 +0000 UTC m=+799.257627938" lastFinishedPulling="2026-01-31 09:14:26.213772076 +0000 UTC m=+804.519648310" observedRunningTime="2026-01-31 09:15:01.341980024 +0000 UTC m=+839.647856238" watchObservedRunningTime="2026-01-31 09:15:01.343285824 +0000 UTC m=+839.649162038"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.386098 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"]
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.387046 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
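The Job name in collect-profiles-29497515-gkkgr encodes its schedule: the CronJob controller suffixes the Jobs it creates with the scheduled time expressed in minutes since the Unix epoch, and 29497515 decodes to exactly the 09:15 run logged above (the trailing -gkkgr is the pod's random suffix). A one-line check, illustrative only:

```go
// Sketch: decode a CronJob-created Job's minute-since-epoch suffix.
package main

import (
	"fmt"
	"time"
)

func main() {
	const suffix = 29497515 // from collect-profiles-29497515-gkkgr
	fmt.Println(time.Unix(suffix*60, 0).UTC()) // 2026-01-31 09:15:00 +0000 UTC
}
```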
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.388810 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xn5gt"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.389287 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.400240 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"]
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.547615 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-apiservice-cert\") pod \"keystone-operator-controller-manager-57cdd4758c-lh9cq\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") " pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.547691 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-webhook-cert\") pod \"keystone-operator-controller-manager-57cdd4758c-lh9cq\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") " pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.547723 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7m6t\" (UniqueName: \"kubernetes.io/projected/241eae26-3908-40e0-af9c-59b54a6ab1a0-kube-api-access-r7m6t\") pod \"keystone-operator-controller-manager-57cdd4758c-lh9cq\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") " pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.648993 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-apiservice-cert\") pod \"keystone-operator-controller-manager-57cdd4758c-lh9cq\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") " pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.649050 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-webhook-cert\") pod \"keystone-operator-controller-manager-57cdd4758c-lh9cq\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") " pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.649076 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7m6t\" (UniqueName: \"kubernetes.io/projected/241eae26-3908-40e0-af9c-59b54a6ab1a0-kube-api-access-r7m6t\") pod \"keystone-operator-controller-manager-57cdd4758c-lh9cq\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") " pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.649849 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.651141 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.667962 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-webhook-cert\") pod \"keystone-operator-controller-manager-57cdd4758c-lh9cq\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") " pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.672044 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7m6t\" (UniqueName: \"kubernetes.io/projected/241eae26-3908-40e0-af9c-59b54a6ab1a0-kube-api-access-r7m6t\") pod \"keystone-operator-controller-manager-57cdd4758c-lh9cq\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") " pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.672736 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-apiservice-cert\") pod \"keystone-operator-controller-manager-57cdd4758c-lh9cq\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") " pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.712342 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xn5gt"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.721041 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.750412 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg94k\" (UniqueName: \"kubernetes.io/projected/a93c21ce-b808-4d18-a859-28ff7552d95f-kube-api-access-vg94k\") pod \"a93c21ce-b808-4d18-a859-28ff7552d95f\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") "
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.750491 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a93c21ce-b808-4d18-a859-28ff7552d95f-secret-volume\") pod \"a93c21ce-b808-4d18-a859-28ff7552d95f\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") "
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.750604 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a93c21ce-b808-4d18-a859-28ff7552d95f-config-volume\") pod \"a93c21ce-b808-4d18-a859-28ff7552d95f\" (UID: \"a93c21ce-b808-4d18-a859-28ff7552d95f\") "
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.751404 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93c21ce-b808-4d18-a859-28ff7552d95f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a93c21ce-b808-4d18-a859-28ff7552d95f" (UID: "a93c21ce-b808-4d18-a859-28ff7552d95f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.754909 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93c21ce-b808-4d18-a859-28ff7552d95f-kube-api-access-vg94k" (OuterVolumeSpecName: "kube-api-access-vg94k") pod "a93c21ce-b808-4d18-a859-28ff7552d95f" (UID: "a93c21ce-b808-4d18-a859-28ff7552d95f"). InnerVolumeSpecName "kube-api-access-vg94k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.755529 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c21ce-b808-4d18-a859-28ff7552d95f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a93c21ce-b808-4d18-a859-28ff7552d95f" (UID: "a93c21ce-b808-4d18-a859-28ff7552d95f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.857792 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg94k\" (UniqueName: \"kubernetes.io/projected/a93c21ce-b808-4d18-a859-28ff7552d95f-kube-api-access-vg94k\") on node \"crc\" DevicePath \"\""
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.857835 4732 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a93c21ce-b808-4d18-a859-28ff7552d95f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.857854 4732 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a93c21ce-b808-4d18-a859-28ff7552d95f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 31 09:15:02 crc kubenswrapper[4732]: I0131 09:15:02.916314 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"]
Jan 31 09:15:02 crc kubenswrapper[4732]: W0131 09:15:02.925893 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod241eae26_3908_40e0_af9c_59b54a6ab1a0.slice/crio-0967cf246a03d050c7636d21dd8ba1334433e9343155d38efb7a34b23ed4beb6 WatchSource:0}: Error finding container 0967cf246a03d050c7636d21dd8ba1334433e9343155d38efb7a34b23ed4beb6: Status 404 returned error can't find the container with id 0967cf246a03d050c7636d21dd8ba1334433e9343155d38efb7a34b23ed4beb6
Jan 31 09:15:03 crc kubenswrapper[4732]: I0131 09:15:03.336184 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq" event={"ID":"241eae26-3908-40e0-af9c-59b54a6ab1a0","Type":"ContainerStarted","Data":"0967cf246a03d050c7636d21dd8ba1334433e9343155d38efb7a34b23ed4beb6"}
Jan 31 09:15:03 crc kubenswrapper[4732]: I0131 09:15:03.337600 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr" event={"ID":"a93c21ce-b808-4d18-a859-28ff7552d95f","Type":"ContainerDied","Data":"43927a110664fe95c7a0ead398c85a94f46078c8752304bb49ce1c7ab5c6a77d"}
Jan 31 09:15:03 crc kubenswrapper[4732]: I0131 09:15:03.337630 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43927a110664fe95c7a0ead398c85a94f46078c8752304bb49ce1c7ab5c6a77d"
Jan 31 09:15:03 crc kubenswrapper[4732]: I0131 09:15:03.337709 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497515-gkkgr"
Jan 31 09:15:07 crc kubenswrapper[4732]: I0131 09:15:07.366049 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq" event={"ID":"241eae26-3908-40e0-af9c-59b54a6ab1a0","Type":"ContainerStarted","Data":"33284c815fdcf867d64c94279b754de6fc3557337b45dc375a7c56112e2acde1"}
Jan 31 09:15:07 crc kubenswrapper[4732]: I0131 09:15:07.367502 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:07 crc kubenswrapper[4732]: I0131 09:15:07.388396 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq" podStartSLOduration=1.6594749819999999 podStartE2EDuration="5.388375623s" podCreationTimestamp="2026-01-31 09:15:02 +0000 UTC" firstStartedPulling="2026-01-31 09:15:02.929236057 +0000 UTC m=+841.235112261" lastFinishedPulling="2026-01-31 09:15:06.658136698 +0000 UTC m=+844.964012902" observedRunningTime="2026-01-31 09:15:07.38133627 +0000 UTC m=+845.687212474" watchObservedRunningTime="2026-01-31 09:15:07.388375623 +0000 UTC m=+845.694251827"
Jan 31 09:15:10 crc kubenswrapper[4732]: I0131 09:15:10.725135 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/rabbitmq-server-0"
Jan 31 09:15:12 crc kubenswrapper[4732]: I0131 09:15:12.725823 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.244828 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-52lrw"]
Jan 31 09:15:16 crc kubenswrapper[4732]: E0131 09:15:16.245328 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93c21ce-b808-4d18-a859-28ff7552d95f" containerName="collect-profiles"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.245342 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93c21ce-b808-4d18-a859-28ff7552d95f" containerName="collect-profiles"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.245450 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93c21ce-b808-4d18-a859-28ff7552d95f" containerName="collect-profiles"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.245863 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-52lrw"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.247816 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-index-dockercfg-gk956"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.254717 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-52lrw"]
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.361315 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbdx6\" (UniqueName: \"kubernetes.io/projected/9fa1ac28-f4fc-49a1-8d64-5fa6b4858596-kube-api-access-sbdx6\") pod \"barbican-operator-index-52lrw\" (UID: \"9fa1ac28-f4fc-49a1-8d64-5fa6b4858596\") " pod="openstack-operators/barbican-operator-index-52lrw"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.463198 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbdx6\" (UniqueName: \"kubernetes.io/projected/9fa1ac28-f4fc-49a1-8d64-5fa6b4858596-kube-api-access-sbdx6\") pod \"barbican-operator-index-52lrw\" (UID: \"9fa1ac28-f4fc-49a1-8d64-5fa6b4858596\") " pod="openstack-operators/barbican-operator-index-52lrw"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.488531 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbdx6\" (UniqueName: \"kubernetes.io/projected/9fa1ac28-f4fc-49a1-8d64-5fa6b4858596-kube-api-access-sbdx6\") pod \"barbican-operator-index-52lrw\" (UID: \"9fa1ac28-f4fc-49a1-8d64-5fa6b4858596\") " pod="openstack-operators/barbican-operator-index-52lrw"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.563270 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-52lrw"
Jan 31 09:15:16 crc kubenswrapper[4732]: I0131 09:15:16.903337 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-52lrw"]
Jan 31 09:15:17 crc kubenswrapper[4732]: I0131 09:15:17.432934 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-52lrw" event={"ID":"9fa1ac28-f4fc-49a1-8d64-5fa6b4858596","Type":"ContainerStarted","Data":"980150e42c0e8a8fe248eee466f92a2ae0628871b51319a868cf69417f0d3e9a"}
Jan 31 09:15:18 crc kubenswrapper[4732]: I0131 09:15:18.441529 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-52lrw" event={"ID":"9fa1ac28-f4fc-49a1-8d64-5fa6b4858596","Type":"ContainerStarted","Data":"356156d32df0119af1ede0bf03f74050fd94e05ed5f478849118068dd3204a84"}
Jan 31 09:15:18 crc kubenswrapper[4732]: I0131 09:15:18.458228 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-52lrw" podStartSLOduration=1.443618227 podStartE2EDuration="2.458206159s" podCreationTimestamp="2026-01-31 09:15:16 +0000 UTC" firstStartedPulling="2026-01-31 09:15:16.907269686 +0000 UTC m=+855.213145890" lastFinishedPulling="2026-01-31 09:15:17.921857628 +0000 UTC m=+856.227733822" observedRunningTime="2026-01-31 09:15:18.454909574 +0000 UTC m=+856.760785798" watchObservedRunningTime="2026-01-31 09:15:18.458206159 +0000 UTC m=+856.764082363"
Jan 31 09:15:21 crc kubenswrapper[4732]: I0131 09:15:21.236147 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-52lrw"]
Jan 31 09:15:21 crc kubenswrapper[4732]: I0131 09:15:21.236700 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-index-52lrw" podUID="9fa1ac28-f4fc-49a1-8d64-5fa6b4858596" containerName="registry-server" containerID="cri-o://356156d32df0119af1ede0bf03f74050fd94e05ed5f478849118068dd3204a84" gracePeriod=2
Jan 31 09:15:21 crc kubenswrapper[4732]: I0131 09:15:21.464394 4732 generic.go:334] "Generic (PLEG): container finished" podID="9fa1ac28-f4fc-49a1-8d64-5fa6b4858596" containerID="356156d32df0119af1ede0bf03f74050fd94e05ed5f478849118068dd3204a84" exitCode=0
Jan 31 09:15:21 crc kubenswrapper[4732]: I0131 09:15:21.464519 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-52lrw" event={"ID":"9fa1ac28-f4fc-49a1-8d64-5fa6b4858596","Type":"ContainerDied","Data":"356156d32df0119af1ede0bf03f74050fd94e05ed5f478849118068dd3204a84"}
Jan 31 09:15:21 crc kubenswrapper[4732]: I0131 09:15:21.947179 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-52lrw"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.046258 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbdx6\" (UniqueName: \"kubernetes.io/projected/9fa1ac28-f4fc-49a1-8d64-5fa6b4858596-kube-api-access-sbdx6\") pod \"9fa1ac28-f4fc-49a1-8d64-5fa6b4858596\" (UID: \"9fa1ac28-f4fc-49a1-8d64-5fa6b4858596\") "
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.056982 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-ps2mw"]
Jan 31 09:15:22 crc kubenswrapper[4732]: E0131 09:15:22.057371 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa1ac28-f4fc-49a1-8d64-5fa6b4858596" containerName="registry-server"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.057397 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa1ac28-f4fc-49a1-8d64-5fa6b4858596" containerName="registry-server"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.057594 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa1ac28-f4fc-49a1-8d64-5fa6b4858596" containerName="registry-server"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.058201 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-ps2mw"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.061605 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-ps2mw"]
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.063060 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa1ac28-f4fc-49a1-8d64-5fa6b4858596-kube-api-access-sbdx6" (OuterVolumeSpecName: "kube-api-access-sbdx6") pod "9fa1ac28-f4fc-49a1-8d64-5fa6b4858596" (UID: "9fa1ac28-f4fc-49a1-8d64-5fa6b4858596"). InnerVolumeSpecName "kube-api-access-sbdx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.149033 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj79r\" (UniqueName: \"kubernetes.io/projected/a6b3a350-8b41-44a0-a6b4-b957947e1df6-kube-api-access-cj79r\") pod \"barbican-operator-index-ps2mw\" (UID: \"a6b3a350-8b41-44a0-a6b4-b957947e1df6\") " pod="openstack-operators/barbican-operator-index-ps2mw"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.149136 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbdx6\" (UniqueName: \"kubernetes.io/projected/9fa1ac28-f4fc-49a1-8d64-5fa6b4858596-kube-api-access-sbdx6\") on node \"crc\" DevicePath \"\""
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.250549 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj79r\" (UniqueName: \"kubernetes.io/projected/a6b3a350-8b41-44a0-a6b4-b957947e1df6-kube-api-access-cj79r\") pod \"barbican-operator-index-ps2mw\" (UID: \"a6b3a350-8b41-44a0-a6b4-b957947e1df6\") " pod="openstack-operators/barbican-operator-index-ps2mw"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.267061 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj79r\" (UniqueName: \"kubernetes.io/projected/a6b3a350-8b41-44a0-a6b4-b957947e1df6-kube-api-access-cj79r\") pod \"barbican-operator-index-ps2mw\" (UID: \"a6b3a350-8b41-44a0-a6b4-b957947e1df6\") " pod="openstack-operators/barbican-operator-index-ps2mw"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.382440 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-ps2mw"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.471729 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-52lrw" event={"ID":"9fa1ac28-f4fc-49a1-8d64-5fa6b4858596","Type":"ContainerDied","Data":"980150e42c0e8a8fe248eee466f92a2ae0628871b51319a868cf69417f0d3e9a"}
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.471784 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-52lrw"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.471811 4732 scope.go:117] "RemoveContainer" containerID="356156d32df0119af1ede0bf03f74050fd94e05ed5f478849118068dd3204a84"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.516833 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-52lrw"]
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.519530 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-index-52lrw"]
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.550375 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa1ac28-f4fc-49a1-8d64-5fa6b4858596" path="/var/lib/kubelet/pods/9fa1ac28-f4fc-49a1-8d64-5fa6b4858596/volumes"
Jan 31 09:15:22 crc kubenswrapper[4732]: I0131 09:15:22.817189 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-ps2mw"]
Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.250705 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nl4xf"]
Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.252499 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nl4xf"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.264007 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-utilities\") pod \"redhat-marketplace-nl4xf\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.264053 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wrv\" (UniqueName: \"kubernetes.io/projected/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-kube-api-access-r7wrv\") pod \"redhat-marketplace-nl4xf\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.264103 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-catalog-content\") pod \"redhat-marketplace-nl4xf\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.264797 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl4xf"] Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.365323 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-utilities\") pod \"redhat-marketplace-nl4xf\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.365842 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wrv\" (UniqueName: \"kubernetes.io/projected/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-kube-api-access-r7wrv\") pod \"redhat-marketplace-nl4xf\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.365994 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-catalog-content\") pod \"redhat-marketplace-nl4xf\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.366646 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-catalog-content\") pod \"redhat-marketplace-nl4xf\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.366735 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-utilities\") pod \"redhat-marketplace-nl4xf\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.421728 4732 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r7wrv\" (UniqueName: \"kubernetes.io/projected/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-kube-api-access-r7wrv\") pod \"redhat-marketplace-nl4xf\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.479444 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-ps2mw" event={"ID":"a6b3a350-8b41-44a0-a6b4-b957947e1df6","Type":"ContainerStarted","Data":"6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192"} Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.479490 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-ps2mw" event={"ID":"a6b3a350-8b41-44a0-a6b4-b957947e1df6","Type":"ContainerStarted","Data":"a2b2ec2462bff27c657ef1956671a25bb4c7e593d5773eb486359c1043f19888"} Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.497394 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-ps2mw" podStartSLOduration=1.045962928 podStartE2EDuration="1.49737193s" podCreationTimestamp="2026-01-31 09:15:22 +0000 UTC" firstStartedPulling="2026-01-31 09:15:22.828530078 +0000 UTC m=+861.134406282" lastFinishedPulling="2026-01-31 09:15:23.27993908 +0000 UTC m=+861.585815284" observedRunningTime="2026-01-31 09:15:23.494922493 +0000 UTC m=+861.800798697" watchObservedRunningTime="2026-01-31 09:15:23.49737193 +0000 UTC m=+861.803248124" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.577129 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.771275 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-create-qtddl"] Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.772304 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.788599 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk"] Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.789436 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.793282 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-db-secret" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.802362 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-qtddl"] Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.814805 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk"] Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.872598 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jr8\" (UniqueName: \"kubernetes.io/projected/c86b412a-c376-48cd-b724-77e5fb6c9347-kube-api-access-99jr8\") pod \"keystone-db-create-qtddl\" (UID: \"c86b412a-c376-48cd-b724-77e5fb6c9347\") " pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.872702 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c86b412a-c376-48cd-b724-77e5fb6c9347-operator-scripts\") pod \"keystone-db-create-qtddl\" (UID: \"c86b412a-c376-48cd-b724-77e5fb6c9347\") " pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.872797 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdrj\" (UniqueName: \"kubernetes.io/projected/92cd61e3-285c-42b8-b382-b8dde5e934b8-kube-api-access-5hdrj\") pod \"keystone-5b86-account-create-update-vh2kk\" (UID: \"92cd61e3-285c-42b8-b382-b8dde5e934b8\") " pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.872865 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cd61e3-285c-42b8-b382-b8dde5e934b8-operator-scripts\") pod \"keystone-5b86-account-create-update-vh2kk\" (UID: \"92cd61e3-285c-42b8-b382-b8dde5e934b8\") " pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.974197 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jr8\" (UniqueName: \"kubernetes.io/projected/c86b412a-c376-48cd-b724-77e5fb6c9347-kube-api-access-99jr8\") pod \"keystone-db-create-qtddl\" (UID: \"c86b412a-c376-48cd-b724-77e5fb6c9347\") " pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.974274 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c86b412a-c376-48cd-b724-77e5fb6c9347-operator-scripts\") pod \"keystone-db-create-qtddl\" (UID: \"c86b412a-c376-48cd-b724-77e5fb6c9347\") " pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.974334 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hdrj\" (UniqueName: \"kubernetes.io/projected/92cd61e3-285c-42b8-b382-b8dde5e934b8-kube-api-access-5hdrj\") pod \"keystone-5b86-account-create-update-vh2kk\" (UID: \"92cd61e3-285c-42b8-b382-b8dde5e934b8\") " 
pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.974461 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cd61e3-285c-42b8-b382-b8dde5e934b8-operator-scripts\") pod \"keystone-5b86-account-create-update-vh2kk\" (UID: \"92cd61e3-285c-42b8-b382-b8dde5e934b8\") " pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.975227 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c86b412a-c376-48cd-b724-77e5fb6c9347-operator-scripts\") pod \"keystone-db-create-qtddl\" (UID: \"c86b412a-c376-48cd-b724-77e5fb6c9347\") " pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.975265 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cd61e3-285c-42b8-b382-b8dde5e934b8-operator-scripts\") pod \"keystone-5b86-account-create-update-vh2kk\" (UID: \"92cd61e3-285c-42b8-b382-b8dde5e934b8\") " pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:23 crc kubenswrapper[4732]: I0131 09:15:23.996551 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99jr8\" (UniqueName: \"kubernetes.io/projected/c86b412a-c376-48cd-b724-77e5fb6c9347-kube-api-access-99jr8\") pod \"keystone-db-create-qtddl\" (UID: \"c86b412a-c376-48cd-b724-77e5fb6c9347\") " pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:24 crc kubenswrapper[4732]: I0131 09:15:24.003233 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdrj\" (UniqueName: \"kubernetes.io/projected/92cd61e3-285c-42b8-b382-b8dde5e934b8-kube-api-access-5hdrj\") pod \"keystone-5b86-account-create-update-vh2kk\" (UID: \"92cd61e3-285c-42b8-b382-b8dde5e934b8\") " pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:24 crc kubenswrapper[4732]: I0131 09:15:24.014072 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl4xf"] Jan 31 09:15:24 crc kubenswrapper[4732]: W0131 09:15:24.023438 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2febcab9_3048_4ea6_bd0d_ce40bf6bcda8.slice/crio-8580e16c6c2470255aa863b68b936832583b54119f231ae9e83c3edb8dea81b0 WatchSource:0}: Error finding container 8580e16c6c2470255aa863b68b936832583b54119f231ae9e83c3edb8dea81b0: Status 404 returned error can't find the container with id 8580e16c6c2470255aa863b68b936832583b54119f231ae9e83c3edb8dea81b0 Jan 31 09:15:24 crc kubenswrapper[4732]: I0131 09:15:24.089573 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:24 crc kubenswrapper[4732]: I0131 09:15:24.104053 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:24 crc kubenswrapper[4732]: I0131 09:15:24.487982 4732 generic.go:334] "Generic (PLEG): container finished" podID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerID="8e30489bcc721c1fa556eeaf5a6475bc2d890769c3c2cf8dbcffc5cc74766ca9" exitCode=0 Jan 31 09:15:24 crc kubenswrapper[4732]: I0131 09:15:24.488235 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl4xf" event={"ID":"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8","Type":"ContainerDied","Data":"8e30489bcc721c1fa556eeaf5a6475bc2d890769c3c2cf8dbcffc5cc74766ca9"} Jan 31 09:15:24 crc kubenswrapper[4732]: I0131 09:15:24.488636 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl4xf" event={"ID":"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8","Type":"ContainerStarted","Data":"8580e16c6c2470255aa863b68b936832583b54119f231ae9e83c3edb8dea81b0"} Jan 31 09:15:24 crc kubenswrapper[4732]: I0131 09:15:24.558901 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk"] Jan 31 09:15:24 crc kubenswrapper[4732]: W0131 09:15:24.569846 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92cd61e3_285c_42b8_b382_b8dde5e934b8.slice/crio-6c2b41fe902287f420696f23117cb8b4be4f6600cd6092ed479c58bf94eca507 WatchSource:0}: Error finding container 6c2b41fe902287f420696f23117cb8b4be4f6600cd6092ed479c58bf94eca507: Status 404 returned error can't find the container with id 6c2b41fe902287f420696f23117cb8b4be4f6600cd6092ed479c58bf94eca507 Jan 31 09:15:24 crc kubenswrapper[4732]: I0131 09:15:24.664546 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-qtddl"] Jan 31 09:15:24 crc kubenswrapper[4732]: W0131 09:15:24.669295 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc86b412a_c376_48cd_b724_77e5fb6c9347.slice/crio-09bdbbc06ea85c16156cd5e9709429078749a80e7d7e1398b8779ce739af35fd WatchSource:0}: Error finding container 09bdbbc06ea85c16156cd5e9709429078749a80e7d7e1398b8779ce739af35fd: Status 404 returned error can't find the container with id 09bdbbc06ea85c16156cd5e9709429078749a80e7d7e1398b8779ce739af35fd Jan 31 09:15:25 crc kubenswrapper[4732]: E0131 09:15:25.002540 4732 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92cd61e3_285c_42b8_b382_b8dde5e934b8.slice/crio-c0a354876c3578c412d4a67e39a312dc83add567ea01562dcdd52716d48fc242.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:15:25 crc kubenswrapper[4732]: I0131 09:15:25.496001 4732 generic.go:334] "Generic (PLEG): container finished" podID="c86b412a-c376-48cd-b724-77e5fb6c9347" containerID="47dfd350e39daf4b8f033d97a0ce4f52541999043f126251274978479fb72c51" exitCode=0 Jan 31 09:15:25 crc kubenswrapper[4732]: I0131 09:15:25.496083 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-qtddl" event={"ID":"c86b412a-c376-48cd-b724-77e5fb6c9347","Type":"ContainerDied","Data":"47dfd350e39daf4b8f033d97a0ce4f52541999043f126251274978479fb72c51"} Jan 31 09:15:25 crc kubenswrapper[4732]: I0131 09:15:25.496116 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/keystone-db-create-qtddl" event={"ID":"c86b412a-c376-48cd-b724-77e5fb6c9347","Type":"ContainerStarted","Data":"09bdbbc06ea85c16156cd5e9709429078749a80e7d7e1398b8779ce739af35fd"} Jan 31 09:15:25 crc kubenswrapper[4732]: I0131 09:15:25.498572 4732 generic.go:334] "Generic (PLEG): container finished" podID="92cd61e3-285c-42b8-b382-b8dde5e934b8" containerID="c0a354876c3578c412d4a67e39a312dc83add567ea01562dcdd52716d48fc242" exitCode=0 Jan 31 09:15:25 crc kubenswrapper[4732]: I0131 09:15:25.498825 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" event={"ID":"92cd61e3-285c-42b8-b382-b8dde5e934b8","Type":"ContainerDied","Data":"c0a354876c3578c412d4a67e39a312dc83add567ea01562dcdd52716d48fc242"} Jan 31 09:15:25 crc kubenswrapper[4732]: I0131 09:15:25.498932 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" event={"ID":"92cd61e3-285c-42b8-b382-b8dde5e934b8","Type":"ContainerStarted","Data":"6c2b41fe902287f420696f23117cb8b4be4f6600cd6092ed479c58bf94eca507"} Jan 31 09:15:25 crc kubenswrapper[4732]: I0131 09:15:25.501856 4732 generic.go:334] "Generic (PLEG): container finished" podID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerID="ba6f3408991a2d77cdc964ae18ce8f88510d43c155703e2ee23d40d6b92c931c" exitCode=0 Jan 31 09:15:25 crc kubenswrapper[4732]: I0131 09:15:25.501888 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl4xf" event={"ID":"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8","Type":"ContainerDied","Data":"ba6f3408991a2d77cdc964ae18ce8f88510d43c155703e2ee23d40d6b92c931c"} Jan 31 09:15:26 crc kubenswrapper[4732]: I0131 09:15:26.511834 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl4xf" event={"ID":"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8","Type":"ContainerStarted","Data":"4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805"} Jan 31 09:15:26 crc kubenswrapper[4732]: I0131 09:15:26.531838 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nl4xf" podStartSLOduration=2.122934314 podStartE2EDuration="3.531814931s" podCreationTimestamp="2026-01-31 09:15:23 +0000 UTC" firstStartedPulling="2026-01-31 09:15:24.490271036 +0000 UTC m=+862.796147240" lastFinishedPulling="2026-01-31 09:15:25.899151653 +0000 UTC m=+864.205027857" observedRunningTime="2026-01-31 09:15:26.53116481 +0000 UTC m=+864.837041024" watchObservedRunningTime="2026-01-31 09:15:26.531814931 +0000 UTC m=+864.837691145" Jan 31 09:15:26 crc kubenswrapper[4732]: I0131 09:15:26.863393 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:26 crc kubenswrapper[4732]: I0131 09:15:26.870420 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.012102 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hdrj\" (UniqueName: \"kubernetes.io/projected/92cd61e3-285c-42b8-b382-b8dde5e934b8-kube-api-access-5hdrj\") pod \"92cd61e3-285c-42b8-b382-b8dde5e934b8\" (UID: \"92cd61e3-285c-42b8-b382-b8dde5e934b8\") " Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.016588 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99jr8\" (UniqueName: \"kubernetes.io/projected/c86b412a-c376-48cd-b724-77e5fb6c9347-kube-api-access-99jr8\") pod \"c86b412a-c376-48cd-b724-77e5fb6c9347\" (UID: \"c86b412a-c376-48cd-b724-77e5fb6c9347\") " Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.016682 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c86b412a-c376-48cd-b724-77e5fb6c9347-operator-scripts\") pod \"c86b412a-c376-48cd-b724-77e5fb6c9347\" (UID: \"c86b412a-c376-48cd-b724-77e5fb6c9347\") " Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.016755 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cd61e3-285c-42b8-b382-b8dde5e934b8-operator-scripts\") pod \"92cd61e3-285c-42b8-b382-b8dde5e934b8\" (UID: \"92cd61e3-285c-42b8-b382-b8dde5e934b8\") " Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.017493 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c86b412a-c376-48cd-b724-77e5fb6c9347-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c86b412a-c376-48cd-b724-77e5fb6c9347" (UID: "c86b412a-c376-48cd-b724-77e5fb6c9347"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.017524 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92cd61e3-285c-42b8-b382-b8dde5e934b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92cd61e3-285c-42b8-b382-b8dde5e934b8" (UID: "92cd61e3-285c-42b8-b382-b8dde5e934b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.020023 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cd61e3-285c-42b8-b382-b8dde5e934b8-kube-api-access-5hdrj" (OuterVolumeSpecName: "kube-api-access-5hdrj") pod "92cd61e3-285c-42b8-b382-b8dde5e934b8" (UID: "92cd61e3-285c-42b8-b382-b8dde5e934b8"). InnerVolumeSpecName "kube-api-access-5hdrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.020387 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86b412a-c376-48cd-b724-77e5fb6c9347-kube-api-access-99jr8" (OuterVolumeSpecName: "kube-api-access-99jr8") pod "c86b412a-c376-48cd-b724-77e5fb6c9347" (UID: "c86b412a-c376-48cd-b724-77e5fb6c9347"). InnerVolumeSpecName "kube-api-access-99jr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.118353 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c86b412a-c376-48cd-b724-77e5fb6c9347-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.118385 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cd61e3-285c-42b8-b382-b8dde5e934b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.118395 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hdrj\" (UniqueName: \"kubernetes.io/projected/92cd61e3-285c-42b8-b382-b8dde5e934b8-kube-api-access-5hdrj\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.118404 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99jr8\" (UniqueName: \"kubernetes.io/projected/c86b412a-c376-48cd-b724-77e5fb6c9347-kube-api-access-99jr8\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.520708 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-qtddl" event={"ID":"c86b412a-c376-48cd-b724-77e5fb6c9347","Type":"ContainerDied","Data":"09bdbbc06ea85c16156cd5e9709429078749a80e7d7e1398b8779ce739af35fd"} Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.520770 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09bdbbc06ea85c16156cd5e9709429078749a80e7d7e1398b8779ce739af35fd" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.520735 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-qtddl" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.524097 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" event={"ID":"92cd61e3-285c-42b8-b382-b8dde5e934b8","Type":"ContainerDied","Data":"6c2b41fe902287f420696f23117cb8b4be4f6600cd6092ed479c58bf94eca507"} Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.524162 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c2b41fe902287f420696f23117cb8b4be4f6600cd6092ed479c58bf94eca507" Jan 31 09:15:27 crc kubenswrapper[4732]: I0131 09:15:27.524271 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.253213 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kqk59"] Jan 31 09:15:28 crc kubenswrapper[4732]: E0131 09:15:28.254099 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cd61e3-285c-42b8-b382-b8dde5e934b8" containerName="mariadb-account-create-update" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.254136 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cd61e3-285c-42b8-b382-b8dde5e934b8" containerName="mariadb-account-create-update" Jan 31 09:15:28 crc kubenswrapper[4732]: E0131 09:15:28.254162 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86b412a-c376-48cd-b724-77e5fb6c9347" containerName="mariadb-database-create" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.254179 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86b412a-c376-48cd-b724-77e5fb6c9347" containerName="mariadb-database-create" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.254448 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86b412a-c376-48cd-b724-77e5fb6c9347" containerName="mariadb-database-create" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.254479 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cd61e3-285c-42b8-b382-b8dde5e934b8" containerName="mariadb-account-create-update" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.256205 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.259545 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqk59"] Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.337436 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx5q9\" (UniqueName: \"kubernetes.io/projected/33f21797-eef0-4dce-9f1f-d6b77a951924-kube-api-access-rx5q9\") pod \"redhat-operators-kqk59\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.337559 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-utilities\") pod \"redhat-operators-kqk59\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.337597 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-catalog-content\") pod \"redhat-operators-kqk59\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.438814 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-catalog-content\") pod \"redhat-operators-kqk59\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 
09:15:28.439212 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx5q9\" (UniqueName: \"kubernetes.io/projected/33f21797-eef0-4dce-9f1f-d6b77a951924-kube-api-access-rx5q9\") pod \"redhat-operators-kqk59\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.439311 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-utilities\") pod \"redhat-operators-kqk59\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.439752 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-utilities\") pod \"redhat-operators-kqk59\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.439762 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-catalog-content\") pod \"redhat-operators-kqk59\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.461223 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx5q9\" (UniqueName: \"kubernetes.io/projected/33f21797-eef0-4dce-9f1f-d6b77a951924-kube-api-access-rx5q9\") pod \"redhat-operators-kqk59\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:28 crc kubenswrapper[4732]: I0131 09:15:28.581218 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.037749 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqk59"] Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.429574 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-sync-vj5z4"] Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.430563 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.432960 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.433204 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.433300 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.433434 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-2sjhb" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.443632 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-vj5z4"] Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.454305 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54rwm\" (UniqueName: \"kubernetes.io/projected/1568a5da-d308-4b7e-94b6-99c846371cb8-kube-api-access-54rwm\") pod \"keystone-db-sync-vj5z4\" (UID: \"1568a5da-d308-4b7e-94b6-99c846371cb8\") " pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.454390 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1568a5da-d308-4b7e-94b6-99c846371cb8-config-data\") pod \"keystone-db-sync-vj5z4\" (UID: \"1568a5da-d308-4b7e-94b6-99c846371cb8\") " pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.541751 4732 generic.go:334] "Generic (PLEG): container finished" podID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerID="cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b" exitCode=0 Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.541801 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqk59" event={"ID":"33f21797-eef0-4dce-9f1f-d6b77a951924","Type":"ContainerDied","Data":"cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b"} Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.541859 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqk59" event={"ID":"33f21797-eef0-4dce-9f1f-d6b77a951924","Type":"ContainerStarted","Data":"360eee913f089b940a4621b7c35dd80948b4548ebcc255ba1cf6e31c1da2b616"} Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.555994 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54rwm\" (UniqueName: \"kubernetes.io/projected/1568a5da-d308-4b7e-94b6-99c846371cb8-kube-api-access-54rwm\") pod \"keystone-db-sync-vj5z4\" (UID: \"1568a5da-d308-4b7e-94b6-99c846371cb8\") " pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.556271 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1568a5da-d308-4b7e-94b6-99c846371cb8-config-data\") pod \"keystone-db-sync-vj5z4\" (UID: \"1568a5da-d308-4b7e-94b6-99c846371cb8\") " pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.573444 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-54rwm\" (UniqueName: \"kubernetes.io/projected/1568a5da-d308-4b7e-94b6-99c846371cb8-kube-api-access-54rwm\") pod \"keystone-db-sync-vj5z4\" (UID: \"1568a5da-d308-4b7e-94b6-99c846371cb8\") " pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.573460 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1568a5da-d308-4b7e-94b6-99c846371cb8-config-data\") pod \"keystone-db-sync-vj5z4\" (UID: \"1568a5da-d308-4b7e-94b6-99c846371cb8\") " pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:15:29 crc kubenswrapper[4732]: I0131 09:15:29.748565 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:15:30 crc kubenswrapper[4732]: I0131 09:15:30.195961 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-vj5z4"] Jan 31 09:15:30 crc kubenswrapper[4732]: W0131 09:15:30.202056 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1568a5da_d308_4b7e_94b6_99c846371cb8.slice/crio-c31583a1029ffae07fced86c20dea255707864cf3d8bba913ecc7e7f43951a8d WatchSource:0}: Error finding container c31583a1029ffae07fced86c20dea255707864cf3d8bba913ecc7e7f43951a8d: Status 404 returned error can't find the container with id c31583a1029ffae07fced86c20dea255707864cf3d8bba913ecc7e7f43951a8d Jan 31 09:15:30 crc kubenswrapper[4732]: I0131 09:15:30.573462 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" event={"ID":"1568a5da-d308-4b7e-94b6-99c846371cb8","Type":"ContainerStarted","Data":"c31583a1029ffae07fced86c20dea255707864cf3d8bba913ecc7e7f43951a8d"} Jan 31 09:15:30 crc kubenswrapper[4732]: I0131 09:15:30.575713 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqk59" event={"ID":"33f21797-eef0-4dce-9f1f-d6b77a951924","Type":"ContainerStarted","Data":"44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c"} Jan 31 09:15:31 crc kubenswrapper[4732]: I0131 09:15:31.585621 4732 generic.go:334] "Generic (PLEG): container finished" podID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerID="44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c" exitCode=0 Jan 31 09:15:31 crc kubenswrapper[4732]: I0131 09:15:31.585739 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqk59" event={"ID":"33f21797-eef0-4dce-9f1f-d6b77a951924","Type":"ContainerDied","Data":"44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c"} Jan 31 09:15:32 crc kubenswrapper[4732]: I0131 09:15:32.382749 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-index-ps2mw" Jan 31 09:15:32 crc kubenswrapper[4732]: I0131 09:15:32.383063 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/barbican-operator-index-ps2mw" Jan 31 09:15:32 crc kubenswrapper[4732]: I0131 09:15:32.415201 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/barbican-operator-index-ps2mw" Jan 31 09:15:32 crc kubenswrapper[4732]: I0131 09:15:32.601599 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqk59" 
event={"ID":"33f21797-eef0-4dce-9f1f-d6b77a951924","Type":"ContainerStarted","Data":"3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5"} Jan 31 09:15:32 crc kubenswrapper[4732]: I0131 09:15:32.620993 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kqk59" podStartSLOduration=2.204577098 podStartE2EDuration="4.620978514s" podCreationTimestamp="2026-01-31 09:15:28 +0000 UTC" firstStartedPulling="2026-01-31 09:15:29.543417559 +0000 UTC m=+867.849293763" lastFinishedPulling="2026-01-31 09:15:31.959818975 +0000 UTC m=+870.265695179" observedRunningTime="2026-01-31 09:15:32.616773511 +0000 UTC m=+870.922649705" watchObservedRunningTime="2026-01-31 09:15:32.620978514 +0000 UTC m=+870.926854718" Jan 31 09:15:32 crc kubenswrapper[4732]: I0131 09:15:32.632979 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-index-ps2mw" Jan 31 09:15:33 crc kubenswrapper[4732]: I0131 09:15:33.578200 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:33 crc kubenswrapper[4732]: I0131 09:15:33.578255 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:33 crc kubenswrapper[4732]: I0131 09:15:33.633386 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:33 crc kubenswrapper[4732]: I0131 09:15:33.684602 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.506706 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln"] Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.508229 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.509852 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tnztr" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.514182 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln"] Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.561087 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.561135 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7mb\" (UniqueName: \"kubernetes.io/projected/5a9d87a5-c953-483d-8183-0a0b8d4abac9-kube-api-access-4c7mb\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.561160 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.662200 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.662244 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7mb\" (UniqueName: \"kubernetes.io/projected/5a9d87a5-c953-483d-8183-0a0b8d4abac9-kube-api-access-4c7mb\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.662274 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.662897 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.663361 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.680534 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7mb\" (UniqueName: \"kubernetes.io/projected/5a9d87a5-c953-483d-8183-0a0b8d4abac9-kube-api-access-4c7mb\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:36 crc kubenswrapper[4732]: I0131 09:15:36.832477 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:38 crc kubenswrapper[4732]: I0131 09:15:38.435870 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl4xf"] Jan 31 09:15:38 crc kubenswrapper[4732]: I0131 09:15:38.436375 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nl4xf" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerName="registry-server" containerID="cri-o://4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805" gracePeriod=2 Jan 31 09:15:38 crc kubenswrapper[4732]: I0131 09:15:38.581785 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:38 crc kubenswrapper[4732]: I0131 09:15:38.581833 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:38 crc kubenswrapper[4732]: I0131 09:15:38.632729 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:38 crc kubenswrapper[4732]: I0131 09:15:38.709139 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:40 crc kubenswrapper[4732]: I0131 09:15:40.671828 4732 generic.go:334] "Generic (PLEG): container finished" podID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerID="4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805" exitCode=0 Jan 31 09:15:40 crc kubenswrapper[4732]: I0131 09:15:40.671884 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl4xf" event={"ID":"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8","Type":"ContainerDied","Data":"4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805"} Jan 31 09:15:43 crc kubenswrapper[4732]: E0131 09:15:43.578322 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805 is running failed: container process not found" containerID="4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 09:15:43 crc kubenswrapper[4732]: E0131 09:15:43.579368 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805 is running failed: container process not found" containerID="4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 09:15:43 crc kubenswrapper[4732]: E0131 09:15:43.580354 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805 is running failed: container process not found" containerID="4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 09:15:43 crc kubenswrapper[4732]: E0131 09:15:43.580385 4732 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nl4xf" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerName="registry-server" Jan 31 09:15:43 crc kubenswrapper[4732]: I0131 09:15:43.635845 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqk59"] Jan 31 09:15:43 crc kubenswrapper[4732]: I0131 09:15:43.636071 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kqk59" podUID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerName="registry-server" containerID="cri-o://3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5" gracePeriod=2 Jan 31 09:15:43 crc kubenswrapper[4732]: E0131 09:15:43.803745 4732 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Jan 31 09:15:43 crc kubenswrapper[4732]: E0131 09:15:43.803928 4732 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54rwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-vj5z4_swift-kuttl-tests(1568a5da-d308-4b7e-94b6-99c846371cb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 09:15:43 crc kubenswrapper[4732]: E0131 09:15:43.805885 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" podUID="1568a5da-d308-4b7e-94b6-99c846371cb8" Jan 31 09:15:43 crc kubenswrapper[4732]: I0131 09:15:43.876700 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:43 crc kubenswrapper[4732]: I0131 09:15:43.962767 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-catalog-content\") pod \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " Jan 31 09:15:43 crc kubenswrapper[4732]: I0131 09:15:43.962862 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-utilities\") pod \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " Jan 31 09:15:43 crc kubenswrapper[4732]: I0131 09:15:43.962905 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7wrv\" (UniqueName: \"kubernetes.io/projected/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-kube-api-access-r7wrv\") pod \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\" (UID: \"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8\") " Jan 31 09:15:43 crc kubenswrapper[4732]: I0131 09:15:43.963896 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-utilities" (OuterVolumeSpecName: "utilities") pod "2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" (UID: "2febcab9-3048-4ea6-bd0d-ce40bf6bcda8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:15:43 crc kubenswrapper[4732]: I0131 09:15:43.978687 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-kube-api-access-r7wrv" (OuterVolumeSpecName: "kube-api-access-r7wrv") pod "2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" (UID: "2febcab9-3048-4ea6-bd0d-ce40bf6bcda8"). InnerVolumeSpecName "kube-api-access-r7wrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:15:43 crc kubenswrapper[4732]: I0131 09:15:43.990550 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" (UID: "2febcab9-3048-4ea6-bd0d-ce40bf6bcda8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.051574 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.064690 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.064731 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.064746 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7wrv\" (UniqueName: \"kubernetes.io/projected/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8-kube-api-access-r7wrv\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.165913 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-catalog-content\") pod \"33f21797-eef0-4dce-9f1f-d6b77a951924\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.166002 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx5q9\" (UniqueName: \"kubernetes.io/projected/33f21797-eef0-4dce-9f1f-d6b77a951924-kube-api-access-rx5q9\") pod \"33f21797-eef0-4dce-9f1f-d6b77a951924\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.166151 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-utilities\") pod \"33f21797-eef0-4dce-9f1f-d6b77a951924\" (UID: \"33f21797-eef0-4dce-9f1f-d6b77a951924\") " Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.167587 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-utilities" (OuterVolumeSpecName: "utilities") pod "33f21797-eef0-4dce-9f1f-d6b77a951924" (UID: "33f21797-eef0-4dce-9f1f-d6b77a951924"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.169814 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f21797-eef0-4dce-9f1f-d6b77a951924-kube-api-access-rx5q9" (OuterVolumeSpecName: "kube-api-access-rx5q9") pod "33f21797-eef0-4dce-9f1f-d6b77a951924" (UID: "33f21797-eef0-4dce-9f1f-d6b77a951924"). InnerVolumeSpecName "kube-api-access-rx5q9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.211987 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln"] Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.268063 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.268332 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx5q9\" (UniqueName: \"kubernetes.io/projected/33f21797-eef0-4dce-9f1f-d6b77a951924-kube-api-access-rx5q9\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.296137 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33f21797-eef0-4dce-9f1f-d6b77a951924" (UID: "33f21797-eef0-4dce-9f1f-d6b77a951924"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.369912 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33f21797-eef0-4dce-9f1f-d6b77a951924-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.705040 4732 generic.go:334] "Generic (PLEG): container finished" podID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerID="3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5" exitCode=0 Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.705073 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqk59" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.705122 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqk59" event={"ID":"33f21797-eef0-4dce-9f1f-d6b77a951924","Type":"ContainerDied","Data":"3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5"} Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.705174 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqk59" event={"ID":"33f21797-eef0-4dce-9f1f-d6b77a951924","Type":"ContainerDied","Data":"360eee913f089b940a4621b7c35dd80948b4548ebcc255ba1cf6e31c1da2b616"} Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.705204 4732 scope.go:117] "RemoveContainer" containerID="3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.708948 4732 generic.go:334] "Generic (PLEG): container finished" podID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" containerID="443c8d984e9ef1e7b308f390e27b580b627477703c6492f50cd0b13f5a986a0a" exitCode=0 Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.709030 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" event={"ID":"5a9d87a5-c953-483d-8183-0a0b8d4abac9","Type":"ContainerDied","Data":"443c8d984e9ef1e7b308f390e27b580b627477703c6492f50cd0b13f5a986a0a"} Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.709061 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" event={"ID":"5a9d87a5-c953-483d-8183-0a0b8d4abac9","Type":"ContainerStarted","Data":"71758a9cca71bb6fba20a483968dd69a7a009aafd37b86b3a13357deef9131c3"} Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.711094 4732 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.711925 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl4xf" event={"ID":"2febcab9-3048-4ea6-bd0d-ce40bf6bcda8","Type":"ContainerDied","Data":"8580e16c6c2470255aa863b68b936832583b54119f231ae9e83c3edb8dea81b0"} Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.711970 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nl4xf" Jan 31 09:15:44 crc kubenswrapper[4732]: E0131 09:15:44.712806 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" podUID="1568a5da-d308-4b7e-94b6-99c846371cb8" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.737207 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqk59"] Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.742298 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kqk59"] Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.747515 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl4xf"] Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.751224 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl4xf"] Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.753948 4732 scope.go:117] "RemoveContainer" containerID="44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.802638 4732 scope.go:117] "RemoveContainer" containerID="cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.828556 4732 scope.go:117] "RemoveContainer" containerID="3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5" Jan 31 09:15:44 crc kubenswrapper[4732]: E0131 09:15:44.829001 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5\": container with ID starting with 3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5 not found: ID does not exist" containerID="3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.829038 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5"} err="failed to get container status \"3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5\": rpc error: code = NotFound desc = could not find container \"3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5\": container with ID starting with 3dae847f1a2ac1d08ed7c6d9e699dd9911b0eb9ca79ac122de91849bbe5f5bc5 not found: ID does not exist" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.829092 4732 scope.go:117] "RemoveContainer" containerID="44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c" Jan 31 09:15:44 crc kubenswrapper[4732]: E0131 09:15:44.829370 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c\": container with ID starting with 44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c not found: ID does not exist" containerID="44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.829435 4732 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c"} err="failed to get container status \"44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c\": rpc error: code = NotFound desc = could not find container \"44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c\": container with ID starting with 44f6676563cf161634b19b4f4bde8ef7a679461edd8877ff6405d99671b94d1c not found: ID does not exist" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.829454 4732 scope.go:117] "RemoveContainer" containerID="cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b" Jan 31 09:15:44 crc kubenswrapper[4732]: E0131 09:15:44.829875 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b\": container with ID starting with cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b not found: ID does not exist" containerID="cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.829923 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b"} err="failed to get container status \"cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b\": rpc error: code = NotFound desc = could not find container \"cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b\": container with ID starting with cc766b5a0a82cd3ceb39ba7e9061b34b32c68752de623072e40d2f503469764b not found: ID does not exist" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.829941 4732 scope.go:117] "RemoveContainer" containerID="4ce0200273326beff1a4d10a7e6f6fa7515641b6637093b2d0bc9e98a2455805" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.859751 4732 scope.go:117] "RemoveContainer" containerID="ba6f3408991a2d77cdc964ae18ce8f88510d43c155703e2ee23d40d6b92c931c" Jan 31 09:15:44 crc kubenswrapper[4732]: I0131 09:15:44.877168 4732 scope.go:117] "RemoveContainer" containerID="8e30489bcc721c1fa556eeaf5a6475bc2d890769c3c2cf8dbcffc5cc74766ca9" Jan 31 09:15:45 crc kubenswrapper[4732]: I0131 09:15:45.721966 4732 generic.go:334] "Generic (PLEG): container finished" podID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" containerID="f344ba5284e12746a74007d5adc8aa42e8ee48a5c3588cb47db5117f3c83e9f0" exitCode=0 Jan 31 09:15:45 crc kubenswrapper[4732]: I0131 09:15:45.722011 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" event={"ID":"5a9d87a5-c953-483d-8183-0a0b8d4abac9","Type":"ContainerDied","Data":"f344ba5284e12746a74007d5adc8aa42e8ee48a5c3588cb47db5117f3c83e9f0"} Jan 31 09:15:46 crc kubenswrapper[4732]: I0131 09:15:46.558327 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" path="/var/lib/kubelet/pods/2febcab9-3048-4ea6-bd0d-ce40bf6bcda8/volumes" Jan 31 09:15:46 crc kubenswrapper[4732]: I0131 09:15:46.560287 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f21797-eef0-4dce-9f1f-d6b77a951924" path="/var/lib/kubelet/pods/33f21797-eef0-4dce-9f1f-d6b77a951924/volumes" Jan 31 09:15:46 crc kubenswrapper[4732]: I0131 09:15:46.739197 4732 generic.go:334] "Generic (PLEG): container finished" podID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" 
containerID="a5bd2e1a25f92dbcf8d8999d3428639a204c1c32490db7803f3573205b07d825" exitCode=0 Jan 31 09:15:46 crc kubenswrapper[4732]: I0131 09:15:46.739237 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" event={"ID":"5a9d87a5-c953-483d-8183-0a0b8d4abac9","Type":"ContainerDied","Data":"a5bd2e1a25f92dbcf8d8999d3428639a204c1c32490db7803f3573205b07d825"} Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.076045 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.124193 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7mb\" (UniqueName: \"kubernetes.io/projected/5a9d87a5-c953-483d-8183-0a0b8d4abac9-kube-api-access-4c7mb\") pod \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.124282 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-util\") pod \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.124344 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-bundle\") pod \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\" (UID: \"5a9d87a5-c953-483d-8183-0a0b8d4abac9\") " Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.125526 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-bundle" (OuterVolumeSpecName: "bundle") pod "5a9d87a5-c953-483d-8183-0a0b8d4abac9" (UID: "5a9d87a5-c953-483d-8183-0a0b8d4abac9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.134057 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9d87a5-c953-483d-8183-0a0b8d4abac9-kube-api-access-4c7mb" (OuterVolumeSpecName: "kube-api-access-4c7mb") pod "5a9d87a5-c953-483d-8183-0a0b8d4abac9" (UID: "5a9d87a5-c953-483d-8183-0a0b8d4abac9"). InnerVolumeSpecName "kube-api-access-4c7mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.159863 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-util" (OuterVolumeSpecName: "util") pod "5a9d87a5-c953-483d-8183-0a0b8d4abac9" (UID: "5a9d87a5-c953-483d-8183-0a0b8d4abac9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.225951 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c7mb\" (UniqueName: \"kubernetes.io/projected/5a9d87a5-c953-483d-8183-0a0b8d4abac9-kube-api-access-4c7mb\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.226023 4732 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.226036 4732 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a9d87a5-c953-483d-8183-0a0b8d4abac9-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.758850 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" event={"ID":"5a9d87a5-c953-483d-8183-0a0b8d4abac9","Type":"ContainerDied","Data":"71758a9cca71bb6fba20a483968dd69a7a009aafd37b86b3a13357deef9131c3"} Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.759262 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71758a9cca71bb6fba20a483968dd69a7a009aafd37b86b3a13357deef9131c3" Jan 31 09:15:48 crc kubenswrapper[4732]: I0131 09:15:48.758956 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.604268 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c"] Jan 31 09:15:59 crc kubenswrapper[4732]: E0131 09:15:59.604962 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerName="registry-server" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.604974 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerName="registry-server" Jan 31 09:15:59 crc kubenswrapper[4732]: E0131 09:15:59.604984 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerName="registry-server" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.604990 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerName="registry-server" Jan 31 09:15:59 crc kubenswrapper[4732]: E0131 09:15:59.605000 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerName="extract-content" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605006 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerName="extract-content" Jan 31 09:15:59 crc kubenswrapper[4732]: E0131 09:15:59.605014 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" containerName="pull" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605021 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" containerName="pull" Jan 31 09:15:59 crc kubenswrapper[4732]: E0131 09:15:59.605029 4732 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" containerName="extract" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605034 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" containerName="extract" Jan 31 09:15:59 crc kubenswrapper[4732]: E0131 09:15:59.605046 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" containerName="util" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605052 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" containerName="util" Jan 31 09:15:59 crc kubenswrapper[4732]: E0131 09:15:59.605058 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerName="extract-content" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605065 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerName="extract-content" Jan 31 09:15:59 crc kubenswrapper[4732]: E0131 09:15:59.605073 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerName="extract-utilities" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605080 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerName="extract-utilities" Jan 31 09:15:59 crc kubenswrapper[4732]: E0131 09:15:59.605087 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerName="extract-utilities" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605092 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerName="extract-utilities" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605194 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2febcab9-3048-4ea6-bd0d-ce40bf6bcda8" containerName="registry-server" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605205 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" containerName="extract" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605215 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f21797-eef0-4dce-9f1f-d6b77a951924" containerName="registry-server" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.605613 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.608830 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-service-cert" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.609638 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k8l4b" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.620506 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c"] Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.794639 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-apiservice-cert\") pod \"barbican-operator-controller-manager-665b569d9f-rjs9c\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") " pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.794877 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwr5w\" (UniqueName: \"kubernetes.io/projected/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-kube-api-access-vwr5w\") pod \"barbican-operator-controller-manager-665b569d9f-rjs9c\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") " pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.794967 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-webhook-cert\") pod \"barbican-operator-controller-manager-665b569d9f-rjs9c\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") " pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.841872 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" event={"ID":"1568a5da-d308-4b7e-94b6-99c846371cb8","Type":"ContainerStarted","Data":"c9325e278cfd071d34b0338c159d59fb628048d898065c645b245c3766ab28e7"} Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.859935 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" podStartSLOduration=2.086817483 podStartE2EDuration="30.859893374s" podCreationTimestamp="2026-01-31 09:15:29 +0000 UTC" firstStartedPulling="2026-01-31 09:15:30.204511756 +0000 UTC m=+868.510387960" lastFinishedPulling="2026-01-31 09:15:58.977587647 +0000 UTC m=+897.283463851" observedRunningTime="2026-01-31 09:15:59.856998192 +0000 UTC m=+898.162874416" watchObservedRunningTime="2026-01-31 09:15:59.859893374 +0000 UTC m=+898.165769588" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.909700 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-webhook-cert\") pod \"barbican-operator-controller-manager-665b569d9f-rjs9c\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") " pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:15:59 crc kubenswrapper[4732]: 
I0131 09:15:59.909845 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-apiservice-cert\") pod \"barbican-operator-controller-manager-665b569d9f-rjs9c\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") " pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.910058 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwr5w\" (UniqueName: \"kubernetes.io/projected/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-kube-api-access-vwr5w\") pod \"barbican-operator-controller-manager-665b569d9f-rjs9c\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") " pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.916654 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-apiservice-cert\") pod \"barbican-operator-controller-manager-665b569d9f-rjs9c\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") " pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.921116 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-webhook-cert\") pod \"barbican-operator-controller-manager-665b569d9f-rjs9c\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") " pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:15:59 crc kubenswrapper[4732]: I0131 09:15:59.932101 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwr5w\" (UniqueName: \"kubernetes.io/projected/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-kube-api-access-vwr5w\") pod \"barbican-operator-controller-manager-665b569d9f-rjs9c\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") " pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:16:00 crc kubenswrapper[4732]: I0131 09:16:00.223561 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:16:00 crc kubenswrapper[4732]: I0131 09:16:00.675247 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c"] Jan 31 09:16:00 crc kubenswrapper[4732]: I0131 09:16:00.847719 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" event={"ID":"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b","Type":"ContainerStarted","Data":"f54629922c72c3da7a2c23ec9c364e1132ebace0c630660cc9bfe34f06a31d4f"} Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.246745 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6qdc6"] Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.248574 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.256324 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qdc6"] Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.380150 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7wmh\" (UniqueName: \"kubernetes.io/projected/5d0de300-63ee-4f40-9750-7eb7d5d10466-kube-api-access-n7wmh\") pod \"certified-operators-6qdc6\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.380228 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-utilities\") pod \"certified-operators-6qdc6\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.380352 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-catalog-content\") pod \"certified-operators-6qdc6\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.482253 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7wmh\" (UniqueName: \"kubernetes.io/projected/5d0de300-63ee-4f40-9750-7eb7d5d10466-kube-api-access-n7wmh\") pod \"certified-operators-6qdc6\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.482347 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-utilities\") pod \"certified-operators-6qdc6\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.482384 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-catalog-content\") pod \"certified-operators-6qdc6\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.483121 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-catalog-content\") pod \"certified-operators-6qdc6\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.485791 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-utilities\") pod \"certified-operators-6qdc6\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.504012 4732 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n7wmh\" (UniqueName: \"kubernetes.io/projected/5d0de300-63ee-4f40-9750-7eb7d5d10466-kube-api-access-n7wmh\") pod \"certified-operators-6qdc6\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:03 crc kubenswrapper[4732]: I0131 09:16:03.569024 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:04 crc kubenswrapper[4732]: I0131 09:16:04.296401 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qdc6"] Jan 31 09:16:04 crc kubenswrapper[4732]: I0131 09:16:04.875999 4732 generic.go:334] "Generic (PLEG): container finished" podID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerID="1fa43d1eeb8c60e26dee8d78abfd035753a71d50bc43bc0b9b3bff5eca6e2540" exitCode=0 Jan 31 09:16:04 crc kubenswrapper[4732]: I0131 09:16:04.876044 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qdc6" event={"ID":"5d0de300-63ee-4f40-9750-7eb7d5d10466","Type":"ContainerDied","Data":"1fa43d1eeb8c60e26dee8d78abfd035753a71d50bc43bc0b9b3bff5eca6e2540"} Jan 31 09:16:04 crc kubenswrapper[4732]: I0131 09:16:04.876384 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qdc6" event={"ID":"5d0de300-63ee-4f40-9750-7eb7d5d10466","Type":"ContainerStarted","Data":"a14dfd56c323ab432efc86e11648711e57a20ae7c23a10d2f26d38d578ad14dc"} Jan 31 09:16:04 crc kubenswrapper[4732]: I0131 09:16:04.879343 4732 generic.go:334] "Generic (PLEG): container finished" podID="1568a5da-d308-4b7e-94b6-99c846371cb8" containerID="c9325e278cfd071d34b0338c159d59fb628048d898065c645b245c3766ab28e7" exitCode=0 Jan 31 09:16:04 crc kubenswrapper[4732]: I0131 09:16:04.879390 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" event={"ID":"1568a5da-d308-4b7e-94b6-99c846371cb8","Type":"ContainerDied","Data":"c9325e278cfd071d34b0338c159d59fb628048d898065c645b245c3766ab28e7"} Jan 31 09:16:04 crc kubenswrapper[4732]: I0131 09:16:04.881141 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" event={"ID":"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b","Type":"ContainerStarted","Data":"613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1"} Jan 31 09:16:04 crc kubenswrapper[4732]: I0131 09:16:04.881294 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:16:04 crc kubenswrapper[4732]: I0131 09:16:04.930194 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" podStartSLOduration=2.802392515 podStartE2EDuration="5.93017462s" podCreationTimestamp="2026-01-31 09:15:59 +0000 UTC" firstStartedPulling="2026-01-31 09:16:00.708012768 +0000 UTC m=+899.013888972" lastFinishedPulling="2026-01-31 09:16:03.835794873 +0000 UTC m=+902.141671077" observedRunningTime="2026-01-31 09:16:04.926806963 +0000 UTC m=+903.232683167" watchObservedRunningTime="2026-01-31 09:16:04.93017462 +0000 UTC m=+903.236050824" Jan 31 09:16:05 crc kubenswrapper[4732]: I0131 09:16:05.889064 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qdc6" 
event={"ID":"5d0de300-63ee-4f40-9750-7eb7d5d10466","Type":"ContainerStarted","Data":"a84f6527d8e718a0c39029fc384c436e9dc135176afd05bca86ac5610704533f"} Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.166241 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.219310 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54rwm\" (UniqueName: \"kubernetes.io/projected/1568a5da-d308-4b7e-94b6-99c846371cb8-kube-api-access-54rwm\") pod \"1568a5da-d308-4b7e-94b6-99c846371cb8\" (UID: \"1568a5da-d308-4b7e-94b6-99c846371cb8\") " Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.219400 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1568a5da-d308-4b7e-94b6-99c846371cb8-config-data\") pod \"1568a5da-d308-4b7e-94b6-99c846371cb8\" (UID: \"1568a5da-d308-4b7e-94b6-99c846371cb8\") " Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.226314 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1568a5da-d308-4b7e-94b6-99c846371cb8-kube-api-access-54rwm" (OuterVolumeSpecName: "kube-api-access-54rwm") pod "1568a5da-d308-4b7e-94b6-99c846371cb8" (UID: "1568a5da-d308-4b7e-94b6-99c846371cb8"). InnerVolumeSpecName "kube-api-access-54rwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.249631 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1568a5da-d308-4b7e-94b6-99c846371cb8-config-data" (OuterVolumeSpecName: "config-data") pod "1568a5da-d308-4b7e-94b6-99c846371cb8" (UID: "1568a5da-d308-4b7e-94b6-99c846371cb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.321268 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54rwm\" (UniqueName: \"kubernetes.io/projected/1568a5da-d308-4b7e-94b6-99c846371cb8-kube-api-access-54rwm\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.321312 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1568a5da-d308-4b7e-94b6-99c846371cb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.901606 4732 generic.go:334] "Generic (PLEG): container finished" podID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerID="a84f6527d8e718a0c39029fc384c436e9dc135176afd05bca86ac5610704533f" exitCode=0 Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.901691 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qdc6" event={"ID":"5d0de300-63ee-4f40-9750-7eb7d5d10466","Type":"ContainerDied","Data":"a84f6527d8e718a0c39029fc384c436e9dc135176afd05bca86ac5610704533f"} Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.904943 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" event={"ID":"1568a5da-d308-4b7e-94b6-99c846371cb8","Type":"ContainerDied","Data":"c31583a1029ffae07fced86c20dea255707864cf3d8bba913ecc7e7f43951a8d"} Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.904987 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31583a1029ffae07fced86c20dea255707864cf3d8bba913ecc7e7f43951a8d" Jan 31 09:16:06 crc kubenswrapper[4732]: I0131 09:16:06.905036 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-vj5z4" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.103165 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-r757b"] Jan 31 09:16:07 crc kubenswrapper[4732]: E0131 09:16:07.103701 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1568a5da-d308-4b7e-94b6-99c846371cb8" containerName="keystone-db-sync" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.103717 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1568a5da-d308-4b7e-94b6-99c846371cb8" containerName="keystone-db-sync" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.103834 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1568a5da-d308-4b7e-94b6-99c846371cb8" containerName="keystone-db-sync" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.104248 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.105804 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"osp-secret" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.106358 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.106443 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.106618 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.107190 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-2sjhb" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.113226 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-r757b"] Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.234636 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-fernet-keys\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.234707 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-config-data\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.234732 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7dm2\" (UniqueName: \"kubernetes.io/projected/65a47d3d-88f0-48b4-b672-9b224ead785f-kube-api-access-s7dm2\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.234856 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-scripts\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.234897 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-credential-keys\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.336465 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-scripts\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.336527 4732 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-credential-keys\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.336561 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-fernet-keys\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.336578 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-config-data\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.336601 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7dm2\" (UniqueName: \"kubernetes.io/projected/65a47d3d-88f0-48b4-b672-9b224ead785f-kube-api-access-s7dm2\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.342447 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-credential-keys\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.344138 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-fernet-keys\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.352070 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-scripts\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.352491 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-config-data\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.357322 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7dm2\" (UniqueName: \"kubernetes.io/projected/65a47d3d-88f0-48b4-b672-9b224ead785f-kube-api-access-s7dm2\") pod \"keystone-bootstrap-r757b\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.428339 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.869472 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-r757b"] Jan 31 09:16:07 crc kubenswrapper[4732]: W0131 09:16:07.888949 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65a47d3d_88f0_48b4_b672_9b224ead785f.slice/crio-2972059b4750f41cbe4b42855dfe50fb8a59b3da0d14b490d2a8c37cd4ca4736 WatchSource:0}: Error finding container 2972059b4750f41cbe4b42855dfe50fb8a59b3da0d14b490d2a8c37cd4ca4736: Status 404 returned error can't find the container with id 2972059b4750f41cbe4b42855dfe50fb8a59b3da0d14b490d2a8c37cd4ca4736 Jan 31 09:16:07 crc kubenswrapper[4732]: I0131 09:16:07.947897 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-r757b" event={"ID":"65a47d3d-88f0-48b4-b672-9b224ead785f","Type":"ContainerStarted","Data":"2972059b4750f41cbe4b42855dfe50fb8a59b3da0d14b490d2a8c37cd4ca4736"} Jan 31 09:16:08 crc kubenswrapper[4732]: I0131 09:16:08.959453 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qdc6" event={"ID":"5d0de300-63ee-4f40-9750-7eb7d5d10466","Type":"ContainerStarted","Data":"ee0c5fb66ad6b1e6b4f42d6ec9cbbd6de131ce1fe2d36272c309841e4eaa9b41"} Jan 31 09:16:08 crc kubenswrapper[4732]: I0131 09:16:08.962466 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-r757b" event={"ID":"65a47d3d-88f0-48b4-b672-9b224ead785f","Type":"ContainerStarted","Data":"09dedf44a2244d0aa2f52cb5add6cbbb78a1014d9997dba9dbd36b2416c85acf"} Jan 31 09:16:08 crc kubenswrapper[4732]: I0131 09:16:08.981617 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qdc6" podStartSLOduration=3.042276297 podStartE2EDuration="5.981602288s" podCreationTimestamp="2026-01-31 09:16:03 +0000 UTC" firstStartedPulling="2026-01-31 09:16:04.877454952 +0000 UTC m=+903.183331166" lastFinishedPulling="2026-01-31 09:16:07.816780953 +0000 UTC m=+906.122657157" observedRunningTime="2026-01-31 09:16:08.978711047 +0000 UTC m=+907.284587261" watchObservedRunningTime="2026-01-31 09:16:08.981602288 +0000 UTC m=+907.287478492" Jan 31 09:16:08 crc kubenswrapper[4732]: I0131 09:16:08.999873 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-bootstrap-r757b" podStartSLOduration=1.999857046 podStartE2EDuration="1.999857046s" podCreationTimestamp="2026-01-31 09:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:16:08.994038921 +0000 UTC m=+907.299915125" watchObservedRunningTime="2026-01-31 09:16:08.999857046 +0000 UTC m=+907.305733250" Jan 31 09:16:10 crc kubenswrapper[4732]: I0131 09:16:10.230703 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.645336 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b4r9l"] Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.646949 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.660520 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4r9l"] Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.724316 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd27r\" (UniqueName: \"kubernetes.io/projected/30d15a7c-e3f9-4280-b0d3-39264c464abe-kube-api-access-dd27r\") pod \"community-operators-b4r9l\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.724443 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-utilities\") pod \"community-operators-b4r9l\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.724506 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-catalog-content\") pod \"community-operators-b4r9l\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.826282 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-utilities\") pod \"community-operators-b4r9l\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.826582 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-catalog-content\") pod \"community-operators-b4r9l\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.826612 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd27r\" (UniqueName: \"kubernetes.io/projected/30d15a7c-e3f9-4280-b0d3-39264c464abe-kube-api-access-dd27r\") pod \"community-operators-b4r9l\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.826923 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-utilities\") pod \"community-operators-b4r9l\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.827182 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-catalog-content\") pod \"community-operators-b4r9l\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.847965 4732 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dd27r\" (UniqueName: \"kubernetes.io/projected/30d15a7c-e3f9-4280-b0d3-39264c464abe-kube-api-access-dd27r\") pod \"community-operators-b4r9l\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:11 crc kubenswrapper[4732]: I0131 09:16:11.967117 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:12 crc kubenswrapper[4732]: I0131 09:16:12.438173 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4r9l"] Jan 31 09:16:12 crc kubenswrapper[4732]: I0131 09:16:12.991641 4732 generic.go:334] "Generic (PLEG): container finished" podID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerID="cc7bdcfd3c4cf42fbbd1013cc743457dedaafba9028f44513b712048f986520e" exitCode=0 Jan 31 09:16:12 crc kubenswrapper[4732]: I0131 09:16:12.991703 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4r9l" event={"ID":"30d15a7c-e3f9-4280-b0d3-39264c464abe","Type":"ContainerDied","Data":"cc7bdcfd3c4cf42fbbd1013cc743457dedaafba9028f44513b712048f986520e"} Jan 31 09:16:12 crc kubenswrapper[4732]: I0131 09:16:12.991969 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4r9l" event={"ID":"30d15a7c-e3f9-4280-b0d3-39264c464abe","Type":"ContainerStarted","Data":"d41d176201ea47fe3677b834f0031fb82b32fbeaf46af88d3969bcb67324887b"} Jan 31 09:16:13 crc kubenswrapper[4732]: I0131 09:16:13.569632 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:13 crc kubenswrapper[4732]: I0131 09:16:13.569687 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:13 crc kubenswrapper[4732]: I0131 09:16:13.679896 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:14 crc kubenswrapper[4732]: I0131 09:16:13.999851 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4r9l" event={"ID":"30d15a7c-e3f9-4280-b0d3-39264c464abe","Type":"ContainerStarted","Data":"f8fa3bcb0182e184d81f9b95c68a6ba376603e25fdaca53ae4f6c19daf283637"} Jan 31 09:16:14 crc kubenswrapper[4732]: I0131 09:16:14.041266 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.010004 4732 generic.go:334] "Generic (PLEG): container finished" podID="65a47d3d-88f0-48b4-b672-9b224ead785f" containerID="09dedf44a2244d0aa2f52cb5add6cbbb78a1014d9997dba9dbd36b2416c85acf" exitCode=0 Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.010094 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-r757b" event={"ID":"65a47d3d-88f0-48b4-b672-9b224ead785f","Type":"ContainerDied","Data":"09dedf44a2244d0aa2f52cb5add6cbbb78a1014d9997dba9dbd36b2416c85acf"} Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.013790 4732 generic.go:334] "Generic (PLEG): container finished" podID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerID="f8fa3bcb0182e184d81f9b95c68a6ba376603e25fdaca53ae4f6c19daf283637" exitCode=0 Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.013902 4732 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4r9l" event={"ID":"30d15a7c-e3f9-4280-b0d3-39264c464abe","Type":"ContainerDied","Data":"f8fa3bcb0182e184d81f9b95c68a6ba376603e25fdaca53ae4f6c19daf283637"} Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.835616 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-create-hffnv"] Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.836913 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.843850 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-f734-account-create-update-wkhs4"] Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.844592 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.854719 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-db-secret" Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.864167 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-hffnv"] Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.883823 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-f734-account-create-update-wkhs4"] Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.992192 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-operator-scripts\") pod \"barbican-db-create-hffnv\" (UID: \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\") " pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.992788 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf9e4683-b288-4b28-9b9b-504461c55a4e-operator-scripts\") pod \"barbican-f734-account-create-update-wkhs4\" (UID: \"bf9e4683-b288-4b28-9b9b-504461c55a4e\") " pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.992943 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb9kr\" (UniqueName: \"kubernetes.io/projected/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-kube-api-access-pb9kr\") pod \"barbican-db-create-hffnv\" (UID: \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\") " pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:15 crc kubenswrapper[4732]: I0131 09:16:15.992984 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpxkb\" (UniqueName: \"kubernetes.io/projected/bf9e4683-b288-4b28-9b9b-504461c55a4e-kube-api-access-lpxkb\") pod \"barbican-f734-account-create-update-wkhs4\" (UID: \"bf9e4683-b288-4b28-9b9b-504461c55a4e\") " pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.021343 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4r9l" 
event={"ID":"30d15a7c-e3f9-4280-b0d3-39264c464abe","Type":"ContainerStarted","Data":"aaaf66b396de81920559109560059a25ed5b958b5791039a68fba6ae2b9ed583"} Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.052742 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b4r9l" podStartSLOduration=2.624408368 podStartE2EDuration="5.052714321s" podCreationTimestamp="2026-01-31 09:16:11 +0000 UTC" firstStartedPulling="2026-01-31 09:16:12.992971039 +0000 UTC m=+911.298847243" lastFinishedPulling="2026-01-31 09:16:15.421276992 +0000 UTC m=+913.727153196" observedRunningTime="2026-01-31 09:16:16.047524737 +0000 UTC m=+914.353400941" watchObservedRunningTime="2026-01-31 09:16:16.052714321 +0000 UTC m=+914.358590535" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.094842 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb9kr\" (UniqueName: \"kubernetes.io/projected/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-kube-api-access-pb9kr\") pod \"barbican-db-create-hffnv\" (UID: \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\") " pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.094920 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpxkb\" (UniqueName: \"kubernetes.io/projected/bf9e4683-b288-4b28-9b9b-504461c55a4e-kube-api-access-lpxkb\") pod \"barbican-f734-account-create-update-wkhs4\" (UID: \"bf9e4683-b288-4b28-9b9b-504461c55a4e\") " pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.094987 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-operator-scripts\") pod \"barbican-db-create-hffnv\" (UID: \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\") " pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.095111 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf9e4683-b288-4b28-9b9b-504461c55a4e-operator-scripts\") pod \"barbican-f734-account-create-update-wkhs4\" (UID: \"bf9e4683-b288-4b28-9b9b-504461c55a4e\") " pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.095926 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-operator-scripts\") pod \"barbican-db-create-hffnv\" (UID: \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\") " pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.096053 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf9e4683-b288-4b28-9b9b-504461c55a4e-operator-scripts\") pod \"barbican-f734-account-create-update-wkhs4\" (UID: \"bf9e4683-b288-4b28-9b9b-504461c55a4e\") " pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.115388 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpxkb\" (UniqueName: \"kubernetes.io/projected/bf9e4683-b288-4b28-9b9b-504461c55a4e-kube-api-access-lpxkb\") pod 
\"barbican-f734-account-create-update-wkhs4\" (UID: \"bf9e4683-b288-4b28-9b9b-504461c55a4e\") " pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.121878 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb9kr\" (UniqueName: \"kubernetes.io/projected/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-kube-api-access-pb9kr\") pod \"barbican-db-create-hffnv\" (UID: \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\") " pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.153809 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.161972 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.333390 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.506547 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-fernet-keys\") pod \"65a47d3d-88f0-48b4-b672-9b224ead785f\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.506592 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-scripts\") pod \"65a47d3d-88f0-48b4-b672-9b224ead785f\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.506611 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-config-data\") pod \"65a47d3d-88f0-48b4-b672-9b224ead785f\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.506699 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-credential-keys\") pod \"65a47d3d-88f0-48b4-b672-9b224ead785f\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.506735 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7dm2\" (UniqueName: \"kubernetes.io/projected/65a47d3d-88f0-48b4-b672-9b224ead785f-kube-api-access-s7dm2\") pod \"65a47d3d-88f0-48b4-b672-9b224ead785f\" (UID: \"65a47d3d-88f0-48b4-b672-9b224ead785f\") " Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.510991 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a47d3d-88f0-48b4-b672-9b224ead785f-kube-api-access-s7dm2" (OuterVolumeSpecName: "kube-api-access-s7dm2") pod "65a47d3d-88f0-48b4-b672-9b224ead785f" (UID: "65a47d3d-88f0-48b4-b672-9b224ead785f"). InnerVolumeSpecName "kube-api-access-s7dm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.511473 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-scripts" (OuterVolumeSpecName: "scripts") pod "65a47d3d-88f0-48b4-b672-9b224ead785f" (UID: "65a47d3d-88f0-48b4-b672-9b224ead785f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.511590 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "65a47d3d-88f0-48b4-b672-9b224ead785f" (UID: "65a47d3d-88f0-48b4-b672-9b224ead785f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.511780 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "65a47d3d-88f0-48b4-b672-9b224ead785f" (UID: "65a47d3d-88f0-48b4-b672-9b224ead785f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.528258 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-config-data" (OuterVolumeSpecName: "config-data") pod "65a47d3d-88f0-48b4-b672-9b224ead785f" (UID: "65a47d3d-88f0-48b4-b672-9b224ead785f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.608374 4732 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.608422 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.608434 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.608447 4732 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a47d3d-88f0-48b4-b672-9b224ead785f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.608462 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7dm2\" (UniqueName: \"kubernetes.io/projected/65a47d3d-88f0-48b4-b672-9b224ead785f-kube-api-access-s7dm2\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:16 crc kubenswrapper[4732]: W0131 09:16:16.629962 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d54d8c7_230a_4831_97a8_d17aef7fa6eb.slice/crio-ba499657f2f30c6dcb7e8a9c8ad38f6d7bf22c0624c460b0a91dbca9a6242663 WatchSource:0}: Error finding container ba499657f2f30c6dcb7e8a9c8ad38f6d7bf22c0624c460b0a91dbca9a6242663: Status 404 returned error can't find the container with 
id ba499657f2f30c6dcb7e8a9c8ad38f6d7bf22c0624c460b0a91dbca9a6242663 Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.632140 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-hffnv"] Jan 31 09:16:16 crc kubenswrapper[4732]: I0131 09:16:16.690306 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-f734-account-create-update-wkhs4"] Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.032486 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-hffnv" event={"ID":"6d54d8c7-230a-4831-97a8-d17aef7fa6eb","Type":"ContainerStarted","Data":"fe93f24b1b1ca8fe4c671105ac0fbeb376225843e31d3614ced260876fce3216"} Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.032535 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-hffnv" event={"ID":"6d54d8c7-230a-4831-97a8-d17aef7fa6eb","Type":"ContainerStarted","Data":"ba499657f2f30c6dcb7e8a9c8ad38f6d7bf22c0624c460b0a91dbca9a6242663"} Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.035372 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" event={"ID":"bf9e4683-b288-4b28-9b9b-504461c55a4e","Type":"ContainerStarted","Data":"8c02dfae66fd3a4f5b4725177171b29a50b8756093929474b400347824b72530"} Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.035401 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" event={"ID":"bf9e4683-b288-4b28-9b9b-504461c55a4e","Type":"ContainerStarted","Data":"3022480a202c2c38f4b2f13a0e2f0a94f0e4c039b5760a147d6502f6f41d8cef"} Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.037717 4732 util.go:48] "No ready sandbox for pod can be found. 
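
The event={...} payloads on the "SyncLoop (PLEG)" records happen to be JSON-shaped, so the pod UID, event type (ContainerStarted / ContainerDied) and the container or sandbox ID can be pulled out mechanically. A small sketch, assuming the payload stays on one line as it does in these records:

package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// PLEG lifecycle event as it appears in the event={...} payload above;
// Data holds a container or sandbox ID depending on Type.
type plegEvent struct {
	ID   string // pod UID
	Type string // ContainerStarted, ContainerDied, ...
	Data string
}

var plegRe = regexp.MustCompile(`pod="([^"]+)" event=(\{.*?\})`)

func main() {
	msg := `"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-hffnv" event={"ID":"6d54d8c7-230a-4831-97a8-d17aef7fa6eb","Type":"ContainerStarted","Data":"fe93f24b1b1ca8fe4c671105ac0fbeb376225843e31d3614ced260876fce3216"}`
	m := plegRe.FindStringSubmatch(msg)
	if m == nil {
		return
	}
	var ev plegEvent
	if err := json.Unmarshal([]byte(m[2]), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s %.12s\n", m[1], ev.Type, ev.Data) // pod: type, ID prefix
}
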
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-r757b" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.037746 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-r757b" event={"ID":"65a47d3d-88f0-48b4-b672-9b224ead785f","Type":"ContainerDied","Data":"2972059b4750f41cbe4b42855dfe50fb8a59b3da0d14b490d2a8c37cd4ca4736"} Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.037761 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2972059b4750f41cbe4b42855dfe50fb8a59b3da0d14b490d2a8c37cd4ca4736" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.063370 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-db-create-hffnv" podStartSLOduration=2.063349338 podStartE2EDuration="2.063349338s" podCreationTimestamp="2026-01-31 09:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:16:17.061068396 +0000 UTC m=+915.366944610" watchObservedRunningTime="2026-01-31 09:16:17.063349338 +0000 UTC m=+915.369225542" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.081448 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" podStartSLOduration=2.08143095 podStartE2EDuration="2.08143095s" podCreationTimestamp="2026-01-31 09:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:16:17.078214538 +0000 UTC m=+915.384090742" watchObservedRunningTime="2026-01-31 09:16:17.08143095 +0000 UTC m=+915.387307154" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.183927 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-7959cb4f8b-4vxsg"] Jan 31 09:16:17 crc kubenswrapper[4732]: E0131 09:16:17.184238 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a47d3d-88f0-48b4-b672-9b224ead785f" containerName="keystone-bootstrap" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.184263 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a47d3d-88f0-48b4-b672-9b224ead785f" containerName="keystone-bootstrap" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.184439 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a47d3d-88f0-48b4-b672-9b224ead785f" containerName="keystone-bootstrap" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.185035 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.187622 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.187818 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-2sjhb" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.187929 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.189381 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.197703 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-7959cb4f8b-4vxsg"] Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.329593 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-config-data\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.329649 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-fernet-keys\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.329691 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-scripts\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.329954 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsj4z\" (UniqueName: \"kubernetes.io/projected/1ee84530-efd7-4d83-9aa2-fb9b8b178496-kube-api-access-jsj4z\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.330020 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-credential-keys\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.430945 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsj4z\" (UniqueName: \"kubernetes.io/projected/1ee84530-efd7-4d83-9aa2-fb9b8b178496-kube-api-access-jsj4z\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.431005 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-credential-keys\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.431052 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-config-data\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.431094 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-fernet-keys\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.431126 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-scripts\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.436051 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-scripts\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.436374 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-fernet-keys\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.436417 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-config-data\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.438225 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-credential-keys\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.450385 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsj4z\" (UniqueName: \"kubernetes.io/projected/1ee84530-efd7-4d83-9aa2-fb9b8b178496-kube-api-access-jsj4z\") pod \"keystone-7959cb4f8b-4vxsg\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.497683 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.497978 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.504703 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:17 crc kubenswrapper[4732]: I0131 09:16:17.941221 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-7959cb4f8b-4vxsg"] Jan 31 09:16:18 crc kubenswrapper[4732]: I0131 09:16:18.050121 4732 generic.go:334] "Generic (PLEG): container finished" podID="6d54d8c7-230a-4831-97a8-d17aef7fa6eb" containerID="fe93f24b1b1ca8fe4c671105ac0fbeb376225843e31d3614ced260876fce3216" exitCode=0 Jan 31 09:16:18 crc kubenswrapper[4732]: I0131 09:16:18.050219 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-hffnv" event={"ID":"6d54d8c7-230a-4831-97a8-d17aef7fa6eb","Type":"ContainerDied","Data":"fe93f24b1b1ca8fe4c671105ac0fbeb376225843e31d3614ced260876fce3216"} Jan 31 09:16:18 crc kubenswrapper[4732]: I0131 09:16:18.052755 4732 generic.go:334] "Generic (PLEG): container finished" podID="bf9e4683-b288-4b28-9b9b-504461c55a4e" containerID="8c02dfae66fd3a4f5b4725177171b29a50b8756093929474b400347824b72530" exitCode=0 Jan 31 09:16:18 crc kubenswrapper[4732]: I0131 09:16:18.052803 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" event={"ID":"bf9e4683-b288-4b28-9b9b-504461c55a4e","Type":"ContainerDied","Data":"8c02dfae66fd3a4f5b4725177171b29a50b8756093929474b400347824b72530"} Jan 31 09:16:18 crc kubenswrapper[4732]: I0131 09:16:18.054206 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" event={"ID":"1ee84530-efd7-4d83-9aa2-fb9b8b178496","Type":"ContainerStarted","Data":"dab5aaa05d738eb5bfa3b09e12be7ced9a61a97b3de7389937699c76857d4ec7"} Jan 31 09:16:18 crc kubenswrapper[4732]: I0131 09:16:18.654898 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qdc6"] Jan 31 09:16:18 crc kubenswrapper[4732]: I0131 09:16:18.655452 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6qdc6" podUID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerName="registry-server" containerID="cri-o://ee0c5fb66ad6b1e6b4f42d6ec9cbbd6de131ce1fe2d36272c309841e4eaa9b41" gracePeriod=2 Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.061512 4732 generic.go:334] "Generic (PLEG): container finished" podID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerID="ee0c5fb66ad6b1e6b4f42d6ec9cbbd6de131ce1fe2d36272c309841e4eaa9b41" exitCode=0 Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.061561 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qdc6" event={"ID":"5d0de300-63ee-4f40-9750-7eb7d5d10466","Type":"ContainerDied","Data":"ee0c5fb66ad6b1e6b4f42d6ec9cbbd6de131ce1fe2d36272c309841e4eaa9b41"} Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.061584 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
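
The Liveness failure above is the kubelet prober issuing a plain HTTP GET against the container's health endpoint and treating any transport error, such as this connection refused, as a failed probe. A minimal sketch of one such probe attempt (the real prober also sets headers, bounds redirects, and caches results, none of which is shown here):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce mimics the GET the prober issued against the
// machine-config-daemon health endpoint in the record above. A transport
// error or a status outside 200-399 counts as a probe failure.
func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
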
pod="openshift-marketplace/certified-operators-6qdc6" event={"ID":"5d0de300-63ee-4f40-9750-7eb7d5d10466","Type":"ContainerDied","Data":"a14dfd56c323ab432efc86e11648711e57a20ae7c23a10d2f26d38d578ad14dc"} Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.061595 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a14dfd56c323ab432efc86e11648711e57a20ae7c23a10d2f26d38d578ad14dc" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.064244 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" event={"ID":"1ee84530-efd7-4d83-9aa2-fb9b8b178496","Type":"ContainerStarted","Data":"2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932"} Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.064346 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.078369 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.109224 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" podStartSLOduration=2.10916686 podStartE2EDuration="2.10916686s" podCreationTimestamp="2026-01-31 09:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:16:19.089058164 +0000 UTC m=+917.394934368" watchObservedRunningTime="2026-01-31 09:16:19.10916686 +0000 UTC m=+917.415043064" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.154356 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-utilities\") pod \"5d0de300-63ee-4f40-9750-7eb7d5d10466\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.154789 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-catalog-content\") pod \"5d0de300-63ee-4f40-9750-7eb7d5d10466\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.154816 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7wmh\" (UniqueName: \"kubernetes.io/projected/5d0de300-63ee-4f40-9750-7eb7d5d10466-kube-api-access-n7wmh\") pod \"5d0de300-63ee-4f40-9750-7eb7d5d10466\" (UID: \"5d0de300-63ee-4f40-9750-7eb7d5d10466\") " Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.155228 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-utilities" (OuterVolumeSpecName: "utilities") pod "5d0de300-63ee-4f40-9750-7eb7d5d10466" (UID: "5d0de300-63ee-4f40-9750-7eb7d5d10466"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.160642 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0de300-63ee-4f40-9750-7eb7d5d10466-kube-api-access-n7wmh" (OuterVolumeSpecName: "kube-api-access-n7wmh") pod "5d0de300-63ee-4f40-9750-7eb7d5d10466" (UID: "5d0de300-63ee-4f40-9750-7eb7d5d10466"). 
InnerVolumeSpecName "kube-api-access-n7wmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.213917 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d0de300-63ee-4f40-9750-7eb7d5d10466" (UID: "5d0de300-63ee-4f40-9750-7eb7d5d10466"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.256776 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.256813 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7wmh\" (UniqueName: \"kubernetes.io/projected/5d0de300-63ee-4f40-9750-7eb7d5d10466-kube-api-access-n7wmh\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.256829 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d0de300-63ee-4f40-9750-7eb7d5d10466-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.385145 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.426566 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.459731 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf9e4683-b288-4b28-9b9b-504461c55a4e-operator-scripts\") pod \"bf9e4683-b288-4b28-9b9b-504461c55a4e\" (UID: \"bf9e4683-b288-4b28-9b9b-504461c55a4e\") " Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.459818 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpxkb\" (UniqueName: \"kubernetes.io/projected/bf9e4683-b288-4b28-9b9b-504461c55a4e-kube-api-access-lpxkb\") pod \"bf9e4683-b288-4b28-9b9b-504461c55a4e\" (UID: \"bf9e4683-b288-4b28-9b9b-504461c55a4e\") " Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.460396 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf9e4683-b288-4b28-9b9b-504461c55a4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf9e4683-b288-4b28-9b9b-504461c55a4e" (UID: "bf9e4683-b288-4b28-9b9b-504461c55a4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.465021 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9e4683-b288-4b28-9b9b-504461c55a4e-kube-api-access-lpxkb" (OuterVolumeSpecName: "kube-api-access-lpxkb") pod "bf9e4683-b288-4b28-9b9b-504461c55a4e" (UID: "bf9e4683-b288-4b28-9b9b-504461c55a4e"). InnerVolumeSpecName "kube-api-access-lpxkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.561142 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-operator-scripts\") pod \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\" (UID: \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\") " Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.561259 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb9kr\" (UniqueName: \"kubernetes.io/projected/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-kube-api-access-pb9kr\") pod \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\" (UID: \"6d54d8c7-230a-4831-97a8-d17aef7fa6eb\") " Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.561615 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf9e4683-b288-4b28-9b9b-504461c55a4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.561644 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpxkb\" (UniqueName: \"kubernetes.io/projected/bf9e4683-b288-4b28-9b9b-504461c55a4e-kube-api-access-lpxkb\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.561637 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d54d8c7-230a-4831-97a8-d17aef7fa6eb" (UID: "6d54d8c7-230a-4831-97a8-d17aef7fa6eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.564594 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-kube-api-access-pb9kr" (OuterVolumeSpecName: "kube-api-access-pb9kr") pod "6d54d8c7-230a-4831-97a8-d17aef7fa6eb" (UID: "6d54d8c7-230a-4831-97a8-d17aef7fa6eb"). InnerVolumeSpecName "kube-api-access-pb9kr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.662412 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.662443 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb9kr\" (UniqueName: \"kubernetes.io/projected/6d54d8c7-230a-4831-97a8-d17aef7fa6eb-kube-api-access-pb9kr\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.849137 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-zhkcf"] Jan 31 09:16:19 crc kubenswrapper[4732]: E0131 09:16:19.849422 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d54d8c7-230a-4831-97a8-d17aef7fa6eb" containerName="mariadb-database-create" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.849438 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d54d8c7-230a-4831-97a8-d17aef7fa6eb" containerName="mariadb-database-create" Jan 31 09:16:19 crc kubenswrapper[4732]: E0131 09:16:19.849454 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerName="registry-server" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.849461 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerName="registry-server" Jan 31 09:16:19 crc kubenswrapper[4732]: E0131 09:16:19.849474 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerName="extract-content" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.849482 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerName="extract-content" Jan 31 09:16:19 crc kubenswrapper[4732]: E0131 09:16:19.849492 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9e4683-b288-4b28-9b9b-504461c55a4e" containerName="mariadb-account-create-update" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.849499 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9e4683-b288-4b28-9b9b-504461c55a4e" containerName="mariadb-account-create-update" Jan 31 09:16:19 crc kubenswrapper[4732]: E0131 09:16:19.849519 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerName="extract-utilities" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.849524 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerName="extract-utilities" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.849651 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d54d8c7-230a-4831-97a8-d17aef7fa6eb" containerName="mariadb-database-create" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.849681 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9e4683-b288-4b28-9b9b-504461c55a4e" containerName="mariadb-account-create-update" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.849692 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0de300-63ee-4f40-9750-7eb7d5d10466" containerName="registry-server" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.850101 4732 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.852531 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-lc87l" Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.903706 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-zhkcf"] Jan 31 09:16:19 crc kubenswrapper[4732]: I0131 09:16:19.974365 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59577\" (UniqueName: \"kubernetes.io/projected/24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf-kube-api-access-59577\") pod \"swift-operator-index-zhkcf\" (UID: \"24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf\") " pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.071948 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-hffnv" event={"ID":"6d54d8c7-230a-4831-97a8-d17aef7fa6eb","Type":"ContainerDied","Data":"ba499657f2f30c6dcb7e8a9c8ad38f6d7bf22c0624c460b0a91dbca9a6242663"} Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.071980 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-hffnv" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.071987 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba499657f2f30c6dcb7e8a9c8ad38f6d7bf22c0624c460b0a91dbca9a6242663" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.073421 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" event={"ID":"bf9e4683-b288-4b28-9b9b-504461c55a4e","Type":"ContainerDied","Data":"3022480a202c2c38f4b2f13a0e2f0a94f0e4c039b5760a147d6502f6f41d8cef"} Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.073459 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qdc6" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.073471 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3022480a202c2c38f4b2f13a0e2f0a94f0e4c039b5760a147d6502f6f41d8cef" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.073474 4732 util.go:48] "No ready sandbox for pod can be found. 
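
The RemoveStaleState / "Deleted CPUSet assignment" runs a few records back are the CPU and memory managers dropping per-container resource assignments for containers that no longer exist before admitting a new pod. A toy version of that cleanup, with a plain nested map standing in for the state_mem store (a hypothetical layout, only meant to show the delete-if-not-active pattern):

package main

import "fmt"

// Hypothetical stand-in for the cpu_manager's state store:
// CPU-set assignments keyed by pod UID, then container name.
type cpuState struct {
	assignments map[string]map[string]string // podUID -> container -> cpuset
}

// removeStaleState drops assignments for containers that are no longer
// running, echoing the RemoveStaleState / "Deleted CPUSet assignment"
// pairs in the records above.
func (s *cpuState) removeStaleState(active map[string]bool) {
	for pod, containers := range s.assignments {
		for name := range containers {
			if active[pod+"/"+name] {
				continue
			}
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", pod, name)
			delete(containers, name)
			fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", pod, name)
		}
		if len(containers) == 0 {
			delete(s.assignments, pod)
		}
	}
}

func main() {
	s := &cpuState{assignments: map[string]map[string]string{
		"5d0de300-63ee-4f40-9750-7eb7d5d10466": {
			"registry-server": "0-3", "extract-content": "0-3", "extract-utilities": "0-3",
		},
	}}
	s.removeStaleState(map[string]bool{}) // none of these containers remain
}
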
Need to start a new one" pod="swift-kuttl-tests/barbican-f734-account-create-update-wkhs4" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.076097 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59577\" (UniqueName: \"kubernetes.io/projected/24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf-kube-api-access-59577\") pod \"swift-operator-index-zhkcf\" (UID: \"24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf\") " pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.116735 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59577\" (UniqueName: \"kubernetes.io/projected/24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf-kube-api-access-59577\") pod \"swift-operator-index-zhkcf\" (UID: \"24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf\") " pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.118700 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qdc6"] Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.128698 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6qdc6"] Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.185809 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.551246 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d0de300-63ee-4f40-9750-7eb7d5d10466" path="/var/lib/kubelet/pods/5d0de300-63ee-4f40-9750-7eb7d5d10466/volumes" Jan 31 09:16:20 crc kubenswrapper[4732]: I0131 09:16:20.593037 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-zhkcf"] Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.083840 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-zhkcf" event={"ID":"24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf","Type":"ContainerStarted","Data":"8514ba99bedbc8f5d369f906a000fdaa79e2f95c8cdc60f9e5782dca2dcdc8ab"} Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.219225 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-sync-482fl"] Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.220321 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.229504 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-vshjt" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.229798 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.230277 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-482fl"] Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.294557 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/52cae0eb-413d-4365-a717-8039a3e3b99f-db-sync-config-data\") pod \"barbican-db-sync-482fl\" (UID: \"52cae0eb-413d-4365-a717-8039a3e3b99f\") " pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.295022 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258tr\" (UniqueName: \"kubernetes.io/projected/52cae0eb-413d-4365-a717-8039a3e3b99f-kube-api-access-258tr\") pod \"barbican-db-sync-482fl\" (UID: \"52cae0eb-413d-4365-a717-8039a3e3b99f\") " pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.396156 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/52cae0eb-413d-4365-a717-8039a3e3b99f-db-sync-config-data\") pod \"barbican-db-sync-482fl\" (UID: \"52cae0eb-413d-4365-a717-8039a3e3b99f\") " pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.396843 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-258tr\" (UniqueName: \"kubernetes.io/projected/52cae0eb-413d-4365-a717-8039a3e3b99f-kube-api-access-258tr\") pod \"barbican-db-sync-482fl\" (UID: \"52cae0eb-413d-4365-a717-8039a3e3b99f\") " pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.401888 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/52cae0eb-413d-4365-a717-8039a3e3b99f-db-sync-config-data\") pod \"barbican-db-sync-482fl\" (UID: \"52cae0eb-413d-4365-a717-8039a3e3b99f\") " pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.415925 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-258tr\" (UniqueName: \"kubernetes.io/projected/52cae0eb-413d-4365-a717-8039a3e3b99f-kube-api-access-258tr\") pod \"barbican-db-sync-482fl\" (UID: \"52cae0eb-413d-4365-a717-8039a3e3b99f\") " pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.548263 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.967916 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:21 crc kubenswrapper[4732]: I0131 09:16:21.967964 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:22 crc kubenswrapper[4732]: I0131 09:16:22.015310 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:22 crc kubenswrapper[4732]: I0131 09:16:22.127391 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:22 crc kubenswrapper[4732]: I0131 09:16:22.654584 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-482fl"] Jan 31 09:16:22 crc kubenswrapper[4732]: W0131 09:16:22.944705 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52cae0eb_413d_4365_a717_8039a3e3b99f.slice/crio-fb6ccbaa8edb2b56ccc907d46d09438babcaeb68d1e54e31b63b92be8948762d WatchSource:0}: Error finding container fb6ccbaa8edb2b56ccc907d46d09438babcaeb68d1e54e31b63b92be8948762d: Status 404 returned error can't find the container with id fb6ccbaa8edb2b56ccc907d46d09438babcaeb68d1e54e31b63b92be8948762d Jan 31 09:16:23 crc kubenswrapper[4732]: I0131 09:16:23.101254 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-482fl" event={"ID":"52cae0eb-413d-4365-a717-8039a3e3b99f","Type":"ContainerStarted","Data":"fb6ccbaa8edb2b56ccc907d46d09438babcaeb68d1e54e31b63b92be8948762d"} Jan 31 09:16:23 crc kubenswrapper[4732]: I0131 09:16:23.437320 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4r9l"] Jan 31 09:16:24 crc kubenswrapper[4732]: I0131 09:16:24.111928 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-zhkcf" event={"ID":"24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf","Type":"ContainerStarted","Data":"27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f"} Jan 31 09:16:24 crc kubenswrapper[4732]: I0131 09:16:24.133170 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-zhkcf" podStartSLOduration=2.735526209 podStartE2EDuration="5.133138771s" podCreationTimestamp="2026-01-31 09:16:19 +0000 UTC" firstStartedPulling="2026-01-31 09:16:20.596597653 +0000 UTC m=+918.902473867" lastFinishedPulling="2026-01-31 09:16:22.994210225 +0000 UTC m=+921.300086429" observedRunningTime="2026-01-31 09:16:24.126206431 +0000 UTC m=+922.432082635" watchObservedRunningTime="2026-01-31 09:16:24.133138771 +0000 UTC m=+922.439015015" Jan 31 09:16:25 crc kubenswrapper[4732]: I0131 09:16:25.118054 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b4r9l" podUID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerName="registry-server" containerID="cri-o://aaaf66b396de81920559109560059a25ed5b958b5791039a68fba6ae2b9ed583" gracePeriod=2 Jan 31 09:16:26 crc kubenswrapper[4732]: I0131 09:16:26.126199 4732 generic.go:334] "Generic (PLEG): container finished" podID="30d15a7c-e3f9-4280-b0d3-39264c464abe" 
containerID="aaaf66b396de81920559109560059a25ed5b958b5791039a68fba6ae2b9ed583" exitCode=0 Jan 31 09:16:26 crc kubenswrapper[4732]: I0131 09:16:26.126280 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4r9l" event={"ID":"30d15a7c-e3f9-4280-b0d3-39264c464abe","Type":"ContainerDied","Data":"aaaf66b396de81920559109560059a25ed5b958b5791039a68fba6ae2b9ed583"} Jan 31 09:16:26 crc kubenswrapper[4732]: I0131 09:16:26.873912 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:26 crc kubenswrapper[4732]: I0131 09:16:26.979043 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-utilities\") pod \"30d15a7c-e3f9-4280-b0d3-39264c464abe\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " Jan 31 09:16:26 crc kubenswrapper[4732]: I0131 09:16:26.979128 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-catalog-content\") pod \"30d15a7c-e3f9-4280-b0d3-39264c464abe\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " Jan 31 09:16:26 crc kubenswrapper[4732]: I0131 09:16:26.979296 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd27r\" (UniqueName: \"kubernetes.io/projected/30d15a7c-e3f9-4280-b0d3-39264c464abe-kube-api-access-dd27r\") pod \"30d15a7c-e3f9-4280-b0d3-39264c464abe\" (UID: \"30d15a7c-e3f9-4280-b0d3-39264c464abe\") " Jan 31 09:16:26 crc kubenswrapper[4732]: I0131 09:16:26.980459 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-utilities" (OuterVolumeSpecName: "utilities") pod "30d15a7c-e3f9-4280-b0d3-39264c464abe" (UID: "30d15a7c-e3f9-4280-b0d3-39264c464abe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:26 crc kubenswrapper[4732]: I0131 09:16:26.985785 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d15a7c-e3f9-4280-b0d3-39264c464abe-kube-api-access-dd27r" (OuterVolumeSpecName: "kube-api-access-dd27r") pod "30d15a7c-e3f9-4280-b0d3-39264c464abe" (UID: "30d15a7c-e3f9-4280-b0d3-39264c464abe"). InnerVolumeSpecName "kube-api-access-dd27r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.034934 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30d15a7c-e3f9-4280-b0d3-39264c464abe" (UID: "30d15a7c-e3f9-4280-b0d3-39264c464abe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.080594 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.080628 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d15a7c-e3f9-4280-b0d3-39264c464abe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.080642 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd27r\" (UniqueName: \"kubernetes.io/projected/30d15a7c-e3f9-4280-b0d3-39264c464abe-kube-api-access-dd27r\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.139425 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4r9l" event={"ID":"30d15a7c-e3f9-4280-b0d3-39264c464abe","Type":"ContainerDied","Data":"d41d176201ea47fe3677b834f0031fb82b32fbeaf46af88d3969bcb67324887b"} Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.139445 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4r9l" Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.139494 4732 scope.go:117] "RemoveContainer" containerID="aaaf66b396de81920559109560059a25ed5b958b5791039a68fba6ae2b9ed583" Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.157073 4732 scope.go:117] "RemoveContainer" containerID="f8fa3bcb0182e184d81f9b95c68a6ba376603e25fdaca53ae4f6c19daf283637" Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.177420 4732 scope.go:117] "RemoveContainer" containerID="cc7bdcfd3c4cf42fbbd1013cc743457dedaafba9028f44513b712048f986520e" Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.178788 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4r9l"] Jan 31 09:16:27 crc kubenswrapper[4732]: I0131 09:16:27.184252 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b4r9l"] Jan 31 09:16:28 crc kubenswrapper[4732]: I0131 09:16:28.150455 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-482fl" event={"ID":"52cae0eb-413d-4365-a717-8039a3e3b99f","Type":"ContainerStarted","Data":"fecab232c8eea055dcb26c6531c6bec4d55c835988b0fcd4b2a823ee72e633d6"} Jan 31 09:16:28 crc kubenswrapper[4732]: I0131 09:16:28.552462 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d15a7c-e3f9-4280-b0d3-39264c464abe" path="/var/lib/kubelet/pods/30d15a7c-e3f9-4280-b0d3-39264c464abe/volumes" Jan 31 09:16:30 crc kubenswrapper[4732]: I0131 09:16:30.163495 4732 generic.go:334] "Generic (PLEG): container finished" podID="52cae0eb-413d-4365-a717-8039a3e3b99f" containerID="fecab232c8eea055dcb26c6531c6bec4d55c835988b0fcd4b2a823ee72e633d6" exitCode=0 Jan 31 09:16:30 crc kubenswrapper[4732]: I0131 09:16:30.163540 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-482fl" event={"ID":"52cae0eb-413d-4365-a717-8039a3e3b99f","Type":"ContainerDied","Data":"fecab232c8eea055dcb26c6531c6bec4d55c835988b0fcd4b2a823ee72e633d6"} Jan 31 09:16:30 crc kubenswrapper[4732]: I0131 09:16:30.223550 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:16:30 crc kubenswrapper[4732]: I0131 09:16:30.223978 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:16:30 crc kubenswrapper[4732]: I0131 09:16:30.266016 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:16:31 crc kubenswrapper[4732]: I0131 09:16:31.194038 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:16:31 crc kubenswrapper[4732]: I0131 09:16:31.490280 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:31 crc kubenswrapper[4732]: I0131 09:16:31.571262 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/52cae0eb-413d-4365-a717-8039a3e3b99f-db-sync-config-data\") pod \"52cae0eb-413d-4365-a717-8039a3e3b99f\" (UID: \"52cae0eb-413d-4365-a717-8039a3e3b99f\") " Jan 31 09:16:31 crc kubenswrapper[4732]: I0131 09:16:31.571318 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-258tr\" (UniqueName: \"kubernetes.io/projected/52cae0eb-413d-4365-a717-8039a3e3b99f-kube-api-access-258tr\") pod \"52cae0eb-413d-4365-a717-8039a3e3b99f\" (UID: \"52cae0eb-413d-4365-a717-8039a3e3b99f\") " Jan 31 09:16:31 crc kubenswrapper[4732]: I0131 09:16:31.578692 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cae0eb-413d-4365-a717-8039a3e3b99f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "52cae0eb-413d-4365-a717-8039a3e3b99f" (UID: "52cae0eb-413d-4365-a717-8039a3e3b99f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:16:31 crc kubenswrapper[4732]: I0131 09:16:31.578980 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cae0eb-413d-4365-a717-8039a3e3b99f-kube-api-access-258tr" (OuterVolumeSpecName: "kube-api-access-258tr") pod "52cae0eb-413d-4365-a717-8039a3e3b99f" (UID: "52cae0eb-413d-4365-a717-8039a3e3b99f"). InnerVolumeSpecName "kube-api-access-258tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:31 crc kubenswrapper[4732]: I0131 09:16:31.673613 4732 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/52cae0eb-413d-4365-a717-8039a3e3b99f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:31 crc kubenswrapper[4732]: I0131 09:16:31.673673 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-258tr\" (UniqueName: \"kubernetes.io/projected/52cae0eb-413d-4365-a717-8039a3e3b99f-kube-api-access-258tr\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.179102 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-482fl" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.179079 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-482fl" event={"ID":"52cae0eb-413d-4365-a717-8039a3e3b99f","Type":"ContainerDied","Data":"fb6ccbaa8edb2b56ccc907d46d09438babcaeb68d1e54e31b63b92be8948762d"} Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.179162 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb6ccbaa8edb2b56ccc907d46d09438babcaeb68d1e54e31b63b92be8948762d" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.501735 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-worker-597f49d4f-8nh24"] Jan 31 09:16:32 crc kubenswrapper[4732]: E0131 09:16:32.502386 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cae0eb-413d-4365-a717-8039a3e3b99f" containerName="barbican-db-sync" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.502400 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cae0eb-413d-4365-a717-8039a3e3b99f" containerName="barbican-db-sync" Jan 31 09:16:32 crc kubenswrapper[4732]: E0131 09:16:32.502417 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerName="registry-server" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.502423 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerName="registry-server" Jan 31 09:16:32 crc kubenswrapper[4732]: E0131 09:16:32.502432 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerName="extract-content" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.502438 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerName="extract-content" Jan 31 09:16:32 crc kubenswrapper[4732]: E0131 09:16:32.502455 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerName="extract-utilities" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.502461 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerName="extract-utilities" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.502574 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cae0eb-413d-4365-a717-8039a3e3b99f" containerName="barbican-db-sync" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.502584 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d15a7c-e3f9-4280-b0d3-39264c464abe" containerName="registry-server" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.503373 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.508569 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-vshjt" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.508751 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.508784 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-worker-config-data" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.509617 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8"] Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.510934 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.512915 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-keystone-listener-config-data" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.516899 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-597f49d4f-8nh24"] Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.530812 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8"] Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.585476 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.585538 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bkhk\" (UniqueName: \"kubernetes.io/projected/7728f3b2-7258-444d-982b-10d416bb61f0-kube-api-access-6bkhk\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.585557 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b1cc40-9985-45d8-bb06-0676ff188c6c-logs\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.585590 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.585620 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data-custom\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.585641 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c684\" (UniqueName: \"kubernetes.io/projected/d3b1cc40-9985-45d8-bb06-0676ff188c6c-kube-api-access-5c684\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.585690 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data-custom\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.585722 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728f3b2-7258-444d-982b-10d416bb61f0-logs\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.638815 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq"] Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.639787 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.641584 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-api-config-data" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.650643 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq"] Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687122 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728f3b2-7258-444d-982b-10d416bb61f0-logs\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687202 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687244 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bkhk\" (UniqueName: \"kubernetes.io/projected/7728f3b2-7258-444d-982b-10d416bb61f0-kube-api-access-6bkhk\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687271 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-logs\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687293 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b1cc40-9985-45d8-bb06-0676ff188c6c-logs\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687319 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687354 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687388 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data-custom\") pod 
\"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687417 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c684\" (UniqueName: \"kubernetes.io/projected/d3b1cc40-9985-45d8-bb06-0676ff188c6c-kube-api-access-5c684\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687453 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lp6p\" (UniqueName: \"kubernetes.io/projected/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-kube-api-access-5lp6p\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687474 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data-custom\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687537 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data-custom\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.687876 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b1cc40-9985-45d8-bb06-0676ff188c6c-logs\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.688203 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728f3b2-7258-444d-982b-10d416bb61f0-logs\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.692189 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data-custom\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.692433 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.693822 4732 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.700325 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data-custom\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.703115 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bkhk\" (UniqueName: \"kubernetes.io/projected/7728f3b2-7258-444d-982b-10d416bb61f0-kube-api-access-6bkhk\") pod \"barbican-keystone-listener-5f5b7fdb46-c8wb8\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.712599 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c684\" (UniqueName: \"kubernetes.io/projected/d3b1cc40-9985-45d8-bb06-0676ff188c6c-kube-api-access-5c684\") pod \"barbican-worker-597f49d4f-8nh24\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.788403 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-logs\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.788465 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.788546 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lp6p\" (UniqueName: \"kubernetes.io/projected/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-kube-api-access-5lp6p\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.788577 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data-custom\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.789459 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-logs\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.794843 
4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data-custom\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.800976 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.820602 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lp6p\" (UniqueName: \"kubernetes.io/projected/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-kube-api-access-5lp6p\") pod \"barbican-api-5bb4486f46-rmsgq\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.824357 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.833681 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:16:32 crc kubenswrapper[4732]: I0131 09:16:32.959991 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:33 crc kubenswrapper[4732]: I0131 09:16:33.259718 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-597f49d4f-8nh24"] Jan 31 09:16:33 crc kubenswrapper[4732]: W0131 09:16:33.263184 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3b1cc40_9985_45d8_bb06_0676ff188c6c.slice/crio-349666c65ccee57c71b989263a2d6e9c2979da105d5b7e91fe8eeec62ef91646 WatchSource:0}: Error finding container 349666c65ccee57c71b989263a2d6e9c2979da105d5b7e91fe8eeec62ef91646: Status 404 returned error can't find the container with id 349666c65ccee57c71b989263a2d6e9c2979da105d5b7e91fe8eeec62ef91646 Jan 31 09:16:33 crc kubenswrapper[4732]: I0131 09:16:33.340259 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8"] Jan 31 09:16:33 crc kubenswrapper[4732]: W0131 09:16:33.347202 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7728f3b2_7258_444d_982b_10d416bb61f0.slice/crio-5117dba79725a077cd6ab2eb2327b8a7f198da1d9f337e252448199e446feb3a WatchSource:0}: Error finding container 5117dba79725a077cd6ab2eb2327b8a7f198da1d9f337e252448199e446feb3a: Status 404 returned error can't find the container with id 5117dba79725a077cd6ab2eb2327b8a7f198da1d9f337e252448199e446feb3a Jan 31 09:16:33 crc kubenswrapper[4732]: I0131 09:16:33.440027 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq"] Jan 31 09:16:34 crc kubenswrapper[4732]: I0131 09:16:34.209222 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" 
event={"ID":"4981d9a9-898f-49ff-809d-58c7ca3bd2a3","Type":"ContainerStarted","Data":"341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8"} Jan 31 09:16:34 crc kubenswrapper[4732]: I0131 09:16:34.209802 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" event={"ID":"4981d9a9-898f-49ff-809d-58c7ca3bd2a3","Type":"ContainerStarted","Data":"2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1"} Jan 31 09:16:34 crc kubenswrapper[4732]: I0131 09:16:34.209837 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:34 crc kubenswrapper[4732]: I0131 09:16:34.209847 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" event={"ID":"4981d9a9-898f-49ff-809d-58c7ca3bd2a3","Type":"ContainerStarted","Data":"dc8a708608424d3770f138159a52c830a7b371ad68ddd44083bf847d118f3337"} Jan 31 09:16:34 crc kubenswrapper[4732]: I0131 09:16:34.209858 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:34 crc kubenswrapper[4732]: I0131 09:16:34.210301 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" event={"ID":"d3b1cc40-9985-45d8-bb06-0676ff188c6c","Type":"ContainerStarted","Data":"349666c65ccee57c71b989263a2d6e9c2979da105d5b7e91fe8eeec62ef91646"} Jan 31 09:16:34 crc kubenswrapper[4732]: I0131 09:16:34.212933 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" event={"ID":"7728f3b2-7258-444d-982b-10d416bb61f0","Type":"ContainerStarted","Data":"5117dba79725a077cd6ab2eb2327b8a7f198da1d9f337e252448199e446feb3a"} Jan 31 09:16:34 crc kubenswrapper[4732]: I0131 09:16:34.232290 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" podStartSLOduration=2.23227087 podStartE2EDuration="2.23227087s" podCreationTimestamp="2026-01-31 09:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:16:34.228839533 +0000 UTC m=+932.534715787" watchObservedRunningTime="2026-01-31 09:16:34.23227087 +0000 UTC m=+932.538147094" Jan 31 09:16:35 crc kubenswrapper[4732]: I0131 09:16:35.226763 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" event={"ID":"d3b1cc40-9985-45d8-bb06-0676ff188c6c","Type":"ContainerStarted","Data":"61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962"} Jan 31 09:16:35 crc kubenswrapper[4732]: I0131 09:16:35.228544 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" event={"ID":"7728f3b2-7258-444d-982b-10d416bb61f0","Type":"ContainerStarted","Data":"d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb"} Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.238158 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" event={"ID":"d3b1cc40-9985-45d8-bb06-0676ff188c6c","Type":"ContainerStarted","Data":"9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf"} Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.241706 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" event={"ID":"7728f3b2-7258-444d-982b-10d416bb61f0","Type":"ContainerStarted","Data":"e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2"} Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.288550 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" podStartSLOduration=2.71358618 podStartE2EDuration="4.288531202s" podCreationTimestamp="2026-01-31 09:16:32 +0000 UTC" firstStartedPulling="2026-01-31 09:16:33.265351407 +0000 UTC m=+931.571227611" lastFinishedPulling="2026-01-31 09:16:34.840296429 +0000 UTC m=+933.146172633" observedRunningTime="2026-01-31 09:16:36.270971266 +0000 UTC m=+934.576847480" watchObservedRunningTime="2026-01-31 09:16:36.288531202 +0000 UTC m=+934.594407396" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.290216 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" podStartSLOduration=2.796252546 podStartE2EDuration="4.290208795s" podCreationTimestamp="2026-01-31 09:16:32 +0000 UTC" firstStartedPulling="2026-01-31 09:16:33.348980913 +0000 UTC m=+931.654857117" lastFinishedPulling="2026-01-31 09:16:34.842937172 +0000 UTC m=+933.148813366" observedRunningTime="2026-01-31 09:16:36.288046496 +0000 UTC m=+934.593922700" watchObservedRunningTime="2026-01-31 09:16:36.290208795 +0000 UTC m=+934.596084999" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.688458 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv"] Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.689916 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.691992 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tnztr" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.704199 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv"] Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.746325 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x8fp\" (UniqueName: \"kubernetes.io/projected/a8db73f4-a3fd-4276-87eb-69db3df2adb6-kube-api-access-9x8fp\") pod \"0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.746482 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-bundle\") pod \"0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.746517 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-util\") pod \"0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.848367 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x8fp\" (UniqueName: \"kubernetes.io/projected/a8db73f4-a3fd-4276-87eb-69db3df2adb6-kube-api-access-9x8fp\") pod \"0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.848494 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-bundle\") pod \"0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.848524 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-util\") pod \"0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.849322 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-util\") pod \"0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.849337 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-bundle\") pod \"0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:36 crc kubenswrapper[4732]: I0131 09:16:36.867273 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x8fp\" (UniqueName: \"kubernetes.io/projected/a8db73f4-a3fd-4276-87eb-69db3df2adb6-kube-api-access-9x8fp\") pod \"0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:37 crc kubenswrapper[4732]: I0131 09:16:37.039755 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:37 crc kubenswrapper[4732]: I0131 09:16:37.473462 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv"] Jan 31 09:16:38 crc kubenswrapper[4732]: I0131 09:16:38.264301 4732 generic.go:334] "Generic (PLEG): container finished" podID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerID="a86707b4de5b35e9337d2beb5deb8d861dead56d2dc194d07926deea0b9a63f5" exitCode=0 Jan 31 09:16:38 crc kubenswrapper[4732]: I0131 09:16:38.264421 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" event={"ID":"a8db73f4-a3fd-4276-87eb-69db3df2adb6","Type":"ContainerDied","Data":"a86707b4de5b35e9337d2beb5deb8d861dead56d2dc194d07926deea0b9a63f5"} Jan 31 09:16:38 crc kubenswrapper[4732]: I0131 09:16:38.264633 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" event={"ID":"a8db73f4-a3fd-4276-87eb-69db3df2adb6","Type":"ContainerStarted","Data":"77e123f88c883e88c388561aaf7732599ce5120cf4c9e11a5fc356e2e9d2a10f"} Jan 31 09:16:39 crc kubenswrapper[4732]: I0131 09:16:39.273324 4732 generic.go:334] "Generic (PLEG): container finished" podID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerID="61b0a7c08542a13be8ba2b2cae47287416caa788b605c317245cad2aa213e84a" exitCode=0 Jan 31 09:16:39 crc kubenswrapper[4732]: I0131 09:16:39.273372 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" event={"ID":"a8db73f4-a3fd-4276-87eb-69db3df2adb6","Type":"ContainerDied","Data":"61b0a7c08542a13be8ba2b2cae47287416caa788b605c317245cad2aa213e84a"} Jan 31 09:16:40 crc kubenswrapper[4732]: I0131 09:16:40.283164 4732 generic.go:334] "Generic (PLEG): container finished" podID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerID="7a57753b9625c7248d75214d3dccdb94e632617b9f7c21be3bc87c9177d4ca52" exitCode=0 Jan 31 09:16:40 crc kubenswrapper[4732]: I0131 09:16:40.283239 4732 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" event={"ID":"a8db73f4-a3fd-4276-87eb-69db3df2adb6","Type":"ContainerDied","Data":"7a57753b9625c7248d75214d3dccdb94e632617b9f7c21be3bc87c9177d4ca52"} Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.649635 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.837793 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x8fp\" (UniqueName: \"kubernetes.io/projected/a8db73f4-a3fd-4276-87eb-69db3df2adb6-kube-api-access-9x8fp\") pod \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.838218 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-bundle\") pod \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.838317 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-util\") pod \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\" (UID: \"a8db73f4-a3fd-4276-87eb-69db3df2adb6\") " Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.839553 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-bundle" (OuterVolumeSpecName: "bundle") pod "a8db73f4-a3fd-4276-87eb-69db3df2adb6" (UID: "a8db73f4-a3fd-4276-87eb-69db3df2adb6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.847278 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8db73f4-a3fd-4276-87eb-69db3df2adb6-kube-api-access-9x8fp" (OuterVolumeSpecName: "kube-api-access-9x8fp") pod "a8db73f4-a3fd-4276-87eb-69db3df2adb6" (UID: "a8db73f4-a3fd-4276-87eb-69db3df2adb6"). InnerVolumeSpecName "kube-api-access-9x8fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.862044 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-util" (OuterVolumeSpecName: "util") pod "a8db73f4-a3fd-4276-87eb-69db3df2adb6" (UID: "a8db73f4-a3fd-4276-87eb-69db3df2adb6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.940146 4732 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.940181 4732 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8db73f4-a3fd-4276-87eb-69db3df2adb6-util\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:41 crc kubenswrapper[4732]: I0131 09:16:41.940190 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x8fp\" (UniqueName: \"kubernetes.io/projected/a8db73f4-a3fd-4276-87eb-69db3df2adb6-kube-api-access-9x8fp\") on node \"crc\" DevicePath \"\"" Jan 31 09:16:42 crc kubenswrapper[4732]: I0131 09:16:42.300761 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" event={"ID":"a8db73f4-a3fd-4276-87eb-69db3df2adb6","Type":"ContainerDied","Data":"77e123f88c883e88c388561aaf7732599ce5120cf4c9e11a5fc356e2e9d2a10f"} Jan 31 09:16:42 crc kubenswrapper[4732]: I0131 09:16:42.300808 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77e123f88c883e88c388561aaf7732599ce5120cf4c9e11a5fc356e2e9d2a10f" Jan 31 09:16:42 crc kubenswrapper[4732]: I0131 09:16:42.300807 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv" Jan 31 09:16:44 crc kubenswrapper[4732]: I0131 09:16:44.497621 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:44 crc kubenswrapper[4732]: I0131 09:16:44.517608 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:16:47 crc kubenswrapper[4732]: I0131 09:16:47.498177 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:16:47 crc kubenswrapper[4732]: I0131 09:16:47.498476 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:16:49 crc kubenswrapper[4732]: I0131 09:16:49.010417 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.581875 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc"] Jan 31 09:16:52 crc kubenswrapper[4732]: E0131 09:16:52.582597 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerName="util" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.582610 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerName="util" Jan 31 09:16:52 crc 
kubenswrapper[4732]: E0131 09:16:52.582622 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerName="extract" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.582627 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerName="extract" Jan 31 09:16:52 crc kubenswrapper[4732]: E0131 09:16:52.582650 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerName="pull" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.582656 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerName="pull" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.582802 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" containerName="extract" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.583218 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.586434 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.586926 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zqjgk" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.601156 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc"] Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.609018 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-apiservice-cert\") pod \"swift-operator-controller-manager-6786c8c4f8-x5zqc\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.609082 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-webhook-cert\") pod \"swift-operator-controller-manager-6786c8c4f8-x5zqc\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.609115 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28txj\" (UniqueName: \"kubernetes.io/projected/17f93e90-1e9a-439c-a130-487ebf54ad10-kube-api-access-28txj\") pod \"swift-operator-controller-manager-6786c8c4f8-x5zqc\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.710608 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-apiservice-cert\") pod \"swift-operator-controller-manager-6786c8c4f8-x5zqc\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " 
pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.710908 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-webhook-cert\") pod \"swift-operator-controller-manager-6786c8c4f8-x5zqc\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.711015 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28txj\" (UniqueName: \"kubernetes.io/projected/17f93e90-1e9a-439c-a130-487ebf54ad10-kube-api-access-28txj\") pod \"swift-operator-controller-manager-6786c8c4f8-x5zqc\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.716533 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-apiservice-cert\") pod \"swift-operator-controller-manager-6786c8c4f8-x5zqc\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.729454 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28txj\" (UniqueName: \"kubernetes.io/projected/17f93e90-1e9a-439c-a130-487ebf54ad10-kube-api-access-28txj\") pod \"swift-operator-controller-manager-6786c8c4f8-x5zqc\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.732162 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-webhook-cert\") pod \"swift-operator-controller-manager-6786c8c4f8-x5zqc\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:52 crc kubenswrapper[4732]: I0131 09:16:52.901097 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:53 crc kubenswrapper[4732]: I0131 09:16:53.337409 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc"] Jan 31 09:16:53 crc kubenswrapper[4732]: I0131 09:16:53.379320 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" event={"ID":"17f93e90-1e9a-439c-a130-487ebf54ad10","Type":"ContainerStarted","Data":"3190f0d353720c75142ebd0bfb06e439e4a2407802386eda349902b5c0a59659"} Jan 31 09:16:55 crc kubenswrapper[4732]: I0131 09:16:55.393170 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" event={"ID":"17f93e90-1e9a-439c-a130-487ebf54ad10","Type":"ContainerStarted","Data":"dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697"} Jan 31 09:16:55 crc kubenswrapper[4732]: I0131 09:16:55.393640 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:16:55 crc kubenswrapper[4732]: I0131 09:16:55.415355 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" podStartSLOduration=1.92508974 podStartE2EDuration="3.415334882s" podCreationTimestamp="2026-01-31 09:16:52 +0000 UTC" firstStartedPulling="2026-01-31 09:16:53.350261002 +0000 UTC m=+951.656137206" lastFinishedPulling="2026-01-31 09:16:54.840506144 +0000 UTC m=+953.146382348" observedRunningTime="2026-01-31 09:16:55.41401769 +0000 UTC m=+953.719893894" watchObservedRunningTime="2026-01-31 09:16:55.415334882 +0000 UTC m=+953.721211086" Jan 31 09:17:02 crc kubenswrapper[4732]: I0131 09:17:02.905392 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.258480 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.262899 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.265688 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.265756 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-54dnj"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.265994 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.266210 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.276806 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.414385 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7ch\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-kube-api-access-pt7ch\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.414459 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.414514 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.414566 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-lock\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.414604 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-cache\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.515901 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.515965 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.516008 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-lock\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.516034 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-cache\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.516078 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt7ch\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-kube-api-access-pt7ch\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: E0131 09:17:07.516151 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:07 crc kubenswrapper[4732]: E0131 09:17:07.516202 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:17:07 crc kubenswrapper[4732]: E0131 09:17:07.516290 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift podName:410ee08c-4c6c-4012-aa46-264179923617 nodeName:}" failed. No retries permitted until 2026-01-31 09:17:08.01626953 +0000 UTC m=+966.322145794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift") pod "swift-storage-0" (UID: "410ee08c-4c6c-4012-aa46-264179923617") : configmap "swift-ring-files" not found
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.516381 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") device mount path \"/mnt/openstack/pv06\"" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.516523 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-lock\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.516741 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-cache\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.536068 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.545697 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt7ch\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-kube-api-access-pt7ch\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.802128 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-thntz"]
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.805043 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
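[annotation] The MountVolume.SetUp failure above comes from the pod's etc-swift volume, a projected volume that sources the swift-ring-files ConfigMap; until that ConfigMap exists, the kubelet cannot materialize the volume and the pod stays in ContainerCreating. A minimal sketch of such a volume in Go using the k8s.io/api types, assuming only the names seen in the log (everything else is illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A projected volume that renders the swift-ring-files ConfigMap
        // into the pod. While the ConfigMap is missing, MountVolume.SetUp
        // fails exactly as in the log entries above.
        etcSwift := corev1.Volume{
            Name: "etc-swift",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "swift-ring-files",
                                },
                            },
                        },
                    },
                },
            },
        }
        fmt.Printf("%+v\n", etcSwift)
    }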
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.810126 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.810222 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.810368 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.816266 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-thntz"]
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.921478 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-ring-data-devices\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.921756 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/98f46fa4-b924-478d-a6d5-6070a4a75aee-etc-swift\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.921891 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-swiftconf\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.921986 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-scripts\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.922085 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-dispersionconf\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:07 crc kubenswrapper[4732]: I0131 09:17:07.922160 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtbpn\" (UniqueName: \"kubernetes.io/projected/98f46fa4-b924-478d-a6d5-6070a4a75aee-kube-api-access-dtbpn\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.023374 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-ring-data-devices\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.023463 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/98f46fa4-b924-478d-a6d5-6070a4a75aee-etc-swift\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.023493 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-swiftconf\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.023517 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-scripts\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.023548 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-dispersionconf\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.023571 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtbpn\" (UniqueName: \"kubernetes.io/projected/98f46fa4-b924-478d-a6d5-6070a4a75aee-kube-api-access-dtbpn\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.023631 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:08 crc kubenswrapper[4732]: E0131 09:17:08.023798 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:08 crc kubenswrapper[4732]: E0131 09:17:08.023811 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:17:08 crc kubenswrapper[4732]: E0131 09:17:08.023852 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift podName:410ee08c-4c6c-4012-aa46-264179923617 nodeName:}" failed. No retries permitted until 2026-01-31 09:17:09.02383847 +0000 UTC m=+967.329714674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift") pod "swift-storage-0" (UID: "410ee08c-4c6c-4012-aa46-264179923617") : configmap "swift-ring-files" not found
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.024229 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-ring-data-devices\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.024489 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/98f46fa4-b924-478d-a6d5-6070a4a75aee-etc-swift\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.025446 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-scripts\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.045228 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-dispersionconf\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.050132 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-swiftconf\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.058191 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtbpn\" (UniqueName: \"kubernetes.io/projected/98f46fa4-b924-478d-a6d5-6070a4a75aee-kube-api-access-dtbpn\") pod \"swift-ring-rebalance-thntz\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") " pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.127824 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.369789 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"]
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.372323 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
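[annotation] The durationBeforeRetry values in these repeated failures (500ms, then 1s, and later 2s, 4s, 8s) show the volume manager's per-operation exponential backoff: each failed MountVolume attempt doubles the wait before the next retry, up to a cap. A rough sketch of that doubling policy, assuming a 500ms base as seen in the log and a 2m2s cap (the cap itself is not visible in this excerpt):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initialDelay = 500 * time.Millisecond          // first durationBeforeRetry in the log
            maxDelay     = 2*time.Minute + 2*time.Second   // assumed cap, not shown in this excerpt
        )
        delay := initialDelay
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, delay)
            delay *= 2 // double the wait after every failure
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Running it prints 500ms, 1s, 2s, 4s, 8s, 16s, matching the retry intervals recorded for the etc-swift volume below.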
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.377328 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"]
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.531440 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-run-httpd\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.531505 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-log-httpd\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.531707 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k5xz\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-kube-api-access-5k5xz\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.531804 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.531892 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00e76d-2f89-454c-be3b-855e8186c78e-config-data\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.585471 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-thntz"]
Jan 31 09:17:08 crc kubenswrapper[4732]: W0131 09:17:08.590865 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98f46fa4_b924_478d_a6d5_6070a4a75aee.slice/crio-d4531d5f1fc9a03a2055376e4d89a22630b0cf87424acef5b26aef666e5742d5 WatchSource:0}: Error finding container d4531d5f1fc9a03a2055376e4d89a22630b0cf87424acef5b26aef666e5742d5: Status 404 returned error can't find the container with id d4531d5f1fc9a03a2055376e4d89a22630b0cf87424acef5b26aef666e5742d5
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.633034 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-run-httpd\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.633281 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-log-httpd\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.633386 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k5xz\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-kube-api-access-5k5xz\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.633481 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.633569 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-run-httpd\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.633723 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00e76d-2f89-454c-be3b-855e8186c78e-config-data\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: E0131 09:17:08.633750 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:08 crc kubenswrapper[4732]: E0131 09:17:08.633954 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8: configmap "swift-ring-files" not found
Jan 31 09:17:08 crc kubenswrapper[4732]: E0131 09:17:08.634078 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift podName:9e00e76d-2f89-454c-be3b-855e8186c78e nodeName:}" failed. No retries permitted until 2026-01-31 09:17:09.134055907 +0000 UTC m=+967.439932111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift") pod "swift-proxy-7d8cf99555-f2jx8" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e") : configmap "swift-ring-files" not found
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.633986 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-log-httpd\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.642339 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00e76d-2f89-454c-be3b-855e8186c78e-config-data\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:08 crc kubenswrapper[4732]: I0131 09:17:08.651773 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k5xz\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-kube-api-access-5k5xz\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:09 crc kubenswrapper[4732]: I0131 09:17:09.039791 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:09 crc kubenswrapper[4732]: E0131 09:17:09.040032 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:09 crc kubenswrapper[4732]: E0131 09:17:09.040051 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:17:09 crc kubenswrapper[4732]: E0131 09:17:09.040115 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift podName:410ee08c-4c6c-4012-aa46-264179923617 nodeName:}" failed. No retries permitted until 2026-01-31 09:17:11.040095615 +0000 UTC m=+969.345971819 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift") pod "swift-storage-0" (UID: "410ee08c-4c6c-4012-aa46-264179923617") : configmap "swift-ring-files" not found
Jan 31 09:17:09 crc kubenswrapper[4732]: I0131 09:17:09.140961 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:09 crc kubenswrapper[4732]: E0131 09:17:09.141191 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:09 crc kubenswrapper[4732]: E0131 09:17:09.141388 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8: configmap "swift-ring-files" not found
Jan 31 09:17:09 crc kubenswrapper[4732]: E0131 09:17:09.141448 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift podName:9e00e76d-2f89-454c-be3b-855e8186c78e nodeName:}" failed. No retries permitted until 2026-01-31 09:17:10.141430531 +0000 UTC m=+968.447306735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift") pod "swift-proxy-7d8cf99555-f2jx8" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e") : configmap "swift-ring-files" not found
Jan 31 09:17:09 crc kubenswrapper[4732]: I0131 09:17:09.510332 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-thntz" event={"ID":"98f46fa4-b924-478d-a6d5-6070a4a75aee","Type":"ContainerStarted","Data":"d4531d5f1fc9a03a2055376e4d89a22630b0cf87424acef5b26aef666e5742d5"}
Jan 31 09:17:10 crc kubenswrapper[4732]: I0131 09:17:10.155336 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:10 crc kubenswrapper[4732]: E0131 09:17:10.155480 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:10 crc kubenswrapper[4732]: E0131 09:17:10.155496 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8: configmap "swift-ring-files" not found
Jan 31 09:17:10 crc kubenswrapper[4732]: E0131 09:17:10.155547 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift podName:9e00e76d-2f89-454c-be3b-855e8186c78e nodeName:}" failed. No retries permitted until 2026-01-31 09:17:12.155531488 +0000 UTC m=+970.461407692 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift") pod "swift-proxy-7d8cf99555-f2jx8" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e") : configmap "swift-ring-files" not found
Jan 31 09:17:11 crc kubenswrapper[4732]: I0131 09:17:11.068350 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:11 crc kubenswrapper[4732]: E0131 09:17:11.068543 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:11 crc kubenswrapper[4732]: E0131 09:17:11.068854 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:17:11 crc kubenswrapper[4732]: E0131 09:17:11.068906 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift podName:410ee08c-4c6c-4012-aa46-264179923617 nodeName:}" failed. No retries permitted until 2026-01-31 09:17:15.068890986 +0000 UTC m=+973.374767190 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift") pod "swift-storage-0" (UID: "410ee08c-4c6c-4012-aa46-264179923617") : configmap "swift-ring-files" not found
Jan 31 09:17:12 crc kubenswrapper[4732]: I0131 09:17:12.186262 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:12 crc kubenswrapper[4732]: E0131 09:17:12.187303 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:12 crc kubenswrapper[4732]: E0131 09:17:12.187336 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8: configmap "swift-ring-files" not found
Jan 31 09:17:12 crc kubenswrapper[4732]: E0131 09:17:12.187419 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift podName:9e00e76d-2f89-454c-be3b-855e8186c78e nodeName:}" failed. No retries permitted until 2026-01-31 09:17:16.187390467 +0000 UTC m=+974.493266711 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift") pod "swift-proxy-7d8cf99555-f2jx8" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e") : configmap "swift-ring-files" not found
Jan 31 09:17:12 crc kubenswrapper[4732]: I0131 09:17:12.536356 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-thntz" event={"ID":"98f46fa4-b924-478d-a6d5-6070a4a75aee","Type":"ContainerStarted","Data":"90c15d5ab54a0813e658db668ab02d84908650e855cd0db9d18d9e71000d2b80"}
Jan 31 09:17:12 crc kubenswrapper[4732]: I0131 09:17:12.563052 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-thntz" podStartSLOduration=2.7230280430000002 podStartE2EDuration="5.563035202s" podCreationTimestamp="2026-01-31 09:17:07 +0000 UTC" firstStartedPulling="2026-01-31 09:17:08.598901925 +0000 UTC m=+966.904778129" lastFinishedPulling="2026-01-31 09:17:11.438909034 +0000 UTC m=+969.744785288" observedRunningTime="2026-01-31 09:17:12.558596801 +0000 UTC m=+970.864473045" watchObservedRunningTime="2026-01-31 09:17:12.563035202 +0000 UTC m=+970.868911406"
Jan 31 09:17:15 crc kubenswrapper[4732]: I0131 09:17:15.133516 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:15 crc kubenswrapper[4732]: E0131 09:17:15.133824 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:15 crc kubenswrapper[4732]: E0131 09:17:15.134122 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:17:15 crc kubenswrapper[4732]: E0131 09:17:15.134213 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift podName:410ee08c-4c6c-4012-aa46-264179923617 nodeName:}" failed. No retries permitted until 2026-01-31 09:17:23.134183944 +0000 UTC m=+981.440060178 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift") pod "swift-storage-0" (UID: "410ee08c-4c6c-4012-aa46-264179923617") : configmap "swift-ring-files" not found
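[annotation] The pod_startup_latency_tracker entry above can be checked by hand: podStartSLOduration is the end-to-end startup time minus the image-pull window, computed on the kubelet's monotonic clock (the m=+... offsets). For swift-ring-rebalance-thntz that is 5.563035202s end to end, with pulls running from m=+966.904778129 to m=+969.744785288. A worked check in Go, using only numbers copied from the log entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the log entry above.
        e2e := 5563035202 * time.Nanosecond // podStartE2EDuration
        firstPull := 966.904778129          // firstStartedPulling, monotonic seconds
        lastPull := 969.744785288           // lastFinishedPulling, monotonic seconds

        pullWindow := time.Duration((lastPull - firstPull) * float64(time.Second))
        slo := e2e - pullWindow
        fmt.Println(slo) // ~2.723028043s, matching podStartSLOduration
    }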
Jan 31 09:17:16 crc kubenswrapper[4732]: I0131 09:17:16.251605 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:16 crc kubenswrapper[4732]: E0131 09:17:16.251827 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:17:16 crc kubenswrapper[4732]: E0131 09:17:16.251842 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8: configmap "swift-ring-files" not found
Jan 31 09:17:16 crc kubenswrapper[4732]: E0131 09:17:16.251896 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift podName:9e00e76d-2f89-454c-be3b-855e8186c78e nodeName:}" failed. No retries permitted until 2026-01-31 09:17:24.251878124 +0000 UTC m=+982.557754328 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift") pod "swift-proxy-7d8cf99555-f2jx8" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e") : configmap "swift-ring-files" not found
Jan 31 09:17:17 crc kubenswrapper[4732]: I0131 09:17:17.497723 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 09:17:17 crc kubenswrapper[4732]: I0131 09:17:17.498050 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:17:17 crc kubenswrapper[4732]: I0131 09:17:17.498098 4732 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8"
Jan 31 09:17:17 crc kubenswrapper[4732]: I0131 09:17:17.498721 4732 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7aa4af54c816ede2288939b5eacffccc23edb9afc7e2a36ef42fe01d52b4ae91"} pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 09:17:17 crc kubenswrapper[4732]: I0131 09:17:17.498777 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" containerID="cri-o://7aa4af54c816ede2288939b5eacffccc23edb9afc7e2a36ef42fe01d52b4ae91" gracePeriod=600
Jan 31 09:17:18 crc kubenswrapper[4732]: I0131 09:17:18.586163 4732 generic.go:334] "Generic (PLEG): container finished" podID="98f46fa4-b924-478d-a6d5-6070a4a75aee" containerID="90c15d5ab54a0813e658db668ab02d84908650e855cd0db9d18d9e71000d2b80" exitCode=0
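[annotation] The machine-config-daemon sequence above is the standard kubelet liveness path: an HTTP GET against 127.0.0.1:8798/health is refused, the probe is marked unhealthy, and the container is killed and restarted. Roughly how such a probe is declared with the k8s.io/api types; only the host, port, and path appear in the log, so the period and threshold below are assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        liveness := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1",        // from the log output above
                    Path: "/health",          // from the log output above
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    10, // assumed; not visible in the log
            FailureThreshold: 3,  // assumed; not visible in the log
        }
        fmt.Printf("%+v\n", liveness)
    }

Note that the gracePeriod=600 in the kill entry is the pod's termination grace period, separate from the probe settings.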
Jan 31 09:17:18 crc kubenswrapper[4732]: I0131 09:17:18.586242 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-thntz" event={"ID":"98f46fa4-b924-478d-a6d5-6070a4a75aee","Type":"ContainerDied","Data":"90c15d5ab54a0813e658db668ab02d84908650e855cd0db9d18d9e71000d2b80"}
Jan 31 09:17:18 crc kubenswrapper[4732]: I0131 09:17:18.597768 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d790207-d357-4b47-87bf-5b505e061820" containerID="7aa4af54c816ede2288939b5eacffccc23edb9afc7e2a36ef42fe01d52b4ae91" exitCode=0
Jan 31 09:17:18 crc kubenswrapper[4732]: I0131 09:17:18.597822 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerDied","Data":"7aa4af54c816ede2288939b5eacffccc23edb9afc7e2a36ef42fe01d52b4ae91"}
Jan 31 09:17:18 crc kubenswrapper[4732]: I0131 09:17:18.597855 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"99271a603de3a603b9be8d8f0bb791de0f202646de27403a5e3efc59790f637f"}
Jan 31 09:17:18 crc kubenswrapper[4732]: I0131 09:17:18.597872 4732 scope.go:117] "RemoveContainer" containerID="e8d3fd1eb561cfd678a2a0df1de54d984c12dc8e05f74e816b693d4b18b74a20"
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.000100 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.030102 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/98f46fa4-b924-478d-a6d5-6070a4a75aee-etc-swift\") pod \"98f46fa4-b924-478d-a6d5-6070a4a75aee\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") "
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.030164 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-dispersionconf\") pod \"98f46fa4-b924-478d-a6d5-6070a4a75aee\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") "
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.030259 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-swiftconf\") pod \"98f46fa4-b924-478d-a6d5-6070a4a75aee\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") "
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.030295 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-scripts\") pod \"98f46fa4-b924-478d-a6d5-6070a4a75aee\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") "
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.030318 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-ring-data-devices\") pod \"98f46fa4-b924-478d-a6d5-6070a4a75aee\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") "
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.030353 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtbpn\" (UniqueName: \"kubernetes.io/projected/98f46fa4-b924-478d-a6d5-6070a4a75aee-kube-api-access-dtbpn\") pod \"98f46fa4-b924-478d-a6d5-6070a4a75aee\" (UID: \"98f46fa4-b924-478d-a6d5-6070a4a75aee\") "
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.032708 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "98f46fa4-b924-478d-a6d5-6070a4a75aee" (UID: "98f46fa4-b924-478d-a6d5-6070a4a75aee"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.036586 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f46fa4-b924-478d-a6d5-6070a4a75aee-kube-api-access-dtbpn" (OuterVolumeSpecName: "kube-api-access-dtbpn") pod "98f46fa4-b924-478d-a6d5-6070a4a75aee" (UID: "98f46fa4-b924-478d-a6d5-6070a4a75aee"). InnerVolumeSpecName "kube-api-access-dtbpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.033175 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f46fa4-b924-478d-a6d5-6070a4a75aee-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "98f46fa4-b924-478d-a6d5-6070a4a75aee" (UID: "98f46fa4-b924-478d-a6d5-6070a4a75aee"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.051205 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-scripts" (OuterVolumeSpecName: "scripts") pod "98f46fa4-b924-478d-a6d5-6070a4a75aee" (UID: "98f46fa4-b924-478d-a6d5-6070a4a75aee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.054195 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "98f46fa4-b924-478d-a6d5-6070a4a75aee" (UID: "98f46fa4-b924-478d-a6d5-6070a4a75aee"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.060818 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "98f46fa4-b924-478d-a6d5-6070a4a75aee" (UID: "98f46fa4-b924-478d-a6d5-6070a4a75aee"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.132078 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.132107 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98f46fa4-b924-478d-a6d5-6070a4a75aee-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.132117 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.132125 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98f46fa4-b924-478d-a6d5-6070a4a75aee-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.132134 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtbpn\" (UniqueName: \"kubernetes.io/projected/98f46fa4-b924-478d-a6d5-6070a4a75aee-kube-api-access-dtbpn\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.132145 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/98f46fa4-b924-478d-a6d5-6070a4a75aee-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.616716 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-thntz" event={"ID":"98f46fa4-b924-478d-a6d5-6070a4a75aee","Type":"ContainerDied","Data":"d4531d5f1fc9a03a2055376e4d89a22630b0cf87424acef5b26aef666e5742d5"}
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.616775 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4531d5f1fc9a03a2055376e4d89a22630b0cf87424acef5b26aef666e5742d5"
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.616739 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-thntz"
Jan 31 09:17:20 crc kubenswrapper[4732]: I0131 09:17:20.864692 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:22 crc kubenswrapper[4732]: I0131 09:17:22.433846 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:23 crc kubenswrapper[4732]: I0131 09:17:23.173155 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:23 crc kubenswrapper[4732]: I0131 09:17:23.180196 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") pod \"swift-storage-0\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:23 crc kubenswrapper[4732]: I0131 09:17:23.478198 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:17:24 crc kubenswrapper[4732]: I0131 09:17:23.999892 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:24 crc kubenswrapper[4732]: I0131 09:17:24.005843 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 09:17:24 crc kubenswrapper[4732]: I0131 09:17:24.294478 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:24 crc kubenswrapper[4732]: I0131 09:17:24.301069 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") pod \"swift-proxy-7d8cf99555-f2jx8\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:24 crc kubenswrapper[4732]: I0131 09:17:24.587845 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
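[annotation] The two MountVolume.SetUp succeeded entries above are the payoff: once the ring-rebalance job has finished (exitCode=0 at 09:17:18) and its output has been published as the swift-ring-files ConfigMap, the pending etc-swift mounts for swift-storage-0 and swift-proxy succeed on their next retry. A hedged client-go sketch of publishing such a ConfigMap; the namespace and name come from the log, while the key, payload, and in-cluster config are assumptions:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes we run inside the cluster
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "swift-ring-files"},
            // Placeholder payload; the real job stores the built ring files.
            BinaryData: map[string][]byte{"object.ring.gz": {}},
        }
        if _, err := client.CoreV1().ConfigMaps("swift-kuttl-tests").Create(
            context.TODO(), cm, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }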
Jan 31 09:17:24 crc kubenswrapper[4732]: I0131 09:17:24.669798 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"bd56bd5520be61343a090a3833bf69ab0722a78470f84b63a8f6a8b06d85cd3e"}
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.063483 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"]
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.540216 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.690292 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" event={"ID":"9e00e76d-2f89-454c-be3b-855e8186c78e","Type":"ContainerStarted","Data":"9fc5b88fc650585c4cf7595095cbcdfc58d2b17c1bb6d6f6621c5213c78ab1f0"}
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.690377 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" event={"ID":"9e00e76d-2f89-454c-be3b-855e8186c78e","Type":"ContainerStarted","Data":"5603cb6f58751f09a186156f4f600f7ee6585d6eeac0b6ca57c68aa2fa8a4f68"}
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.690409 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" event={"ID":"9e00e76d-2f89-454c-be3b-855e8186c78e","Type":"ContainerStarted","Data":"70a5df12fcc55bf7e0349357dc9e7a70341a9c65f4cf5241e077476ae04d6820"}
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.691820 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.691871 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.697153 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"dbfd2c11334695f732524b533be449dd61ebf7d7637f32b3bcfe1d1941b37863"}
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.697195 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"b63fa3de54dc547aaba5e3c717d85f707b81c7144e50e21b9e827db634951724"}
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.697211 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"61cb3b1d16a1d1d363ee3738835e89a0f2f6d3986bf10918c00e484f7f8c9ef7"}
Jan 31 09:17:25 crc kubenswrapper[4732]: I0131 09:17:25.716741 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" podStartSLOduration=17.716657599 podStartE2EDuration="17.716657599s" podCreationTimestamp="2026-01-31 09:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:17:25.709561409 +0000 UTC m=+984.015437633" watchObservedRunningTime="2026-01-31 09:17:25.716657599 +0000 UTC m=+984.022533843"
Jan 31 09:17:26 crc kubenswrapper[4732]: I0131 09:17:26.705256 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"7f58b9ade96eae3a4c9b405f187c769232309e0c1fb241bf5444c4c304ab649f"}
Jan 31 09:17:26 crc kubenswrapper[4732]: I0131 09:17:26.705569 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"6a55e95bf79b05700a43bb7800d35ec7f862e81feb1caac0721521392f5f8e7f"}
Jan 31 09:17:27 crc kubenswrapper[4732]: I0131 09:17:27.134828 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:27 crc kubenswrapper[4732]: I0131 09:17:27.717623 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"20b5017b8e043011a79ad2661ea73a50a94d52bc115728a4ab154b985c7430df"}
Jan 31 09:17:27 crc kubenswrapper[4732]: I0131 09:17:27.717709 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"73d8c40ae242fa0366cc532a47eefd15e47b7f5aeeb79eb1c8113e496beaf6e2"}
Jan 31 09:17:27 crc kubenswrapper[4732]: I0131 09:17:27.717729 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"2ab9eec16ab6fbc15e432a8ef644dbb519f0a4dd256590fb6f9676ffceba4611"}
Jan 31 09:17:28 crc kubenswrapper[4732]: I0131 09:17:28.731721 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"5e6d65fa1c26ce7ebf7ff25c277b074ca08845044a94dbdd6b32ff300ac68b7e"}
Jan 31 09:17:28 crc kubenswrapper[4732]: I0131 09:17:28.732003 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"b045d34930a290c72208028dc6d1d6b3535e4b8818d51998647df06b3f7c2bdc"}
Jan 31 09:17:28 crc kubenswrapper[4732]: I0131 09:17:28.732016 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"288678dcae1157f1c7ebe15c590533bc3accdd196dbe20808122f33180c2092e"}
Jan 31 09:17:28 crc kubenswrapper[4732]: I0131 09:17:28.732025 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"7907616b4b6ec30fa97533949de2bee3dfcee1ddac0b222cb83f6c75835a1755"}
Jan 31 09:17:28 crc kubenswrapper[4732]: I0131 09:17:28.745346 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:29 crc kubenswrapper[4732]: I0131 09:17:29.748017 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"81f782a2d54156b4990bcbb2dab091aeb970934a10ac85e7abc67c813255ffee"}
Jan 31 09:17:29 crc kubenswrapper[4732]: I0131 09:17:29.748060 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"db1b9074ef0795f714675af65f50c4b87204369e4a7ad86e330547b5ab8f05fb"}
Jan 31 09:17:29 crc kubenswrapper[4732]: I0131 09:17:29.748070 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerStarted","Data":"7f3c24b1866708a685d278257ae506a6fd79ff0289067bb9f41d1ad991336cc6"}
Jan 31 09:17:29 crc kubenswrapper[4732]: I0131 09:17:29.787436 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=19.813508143 podStartE2EDuration="23.787415409s" podCreationTimestamp="2026-01-31 09:17:06 +0000 UTC" firstStartedPulling="2026-01-31 09:17:24.006800774 +0000 UTC m=+982.312676978" lastFinishedPulling="2026-01-31 09:17:27.98070804 +0000 UTC m=+986.286584244" observedRunningTime="2026-01-31 09:17:29.78136795 +0000 UTC m=+988.087244194" watchObservedRunningTime="2026-01-31 09:17:29.787415409 +0000 UTC m=+988.093291613"
Jan 31 09:17:30 crc kubenswrapper[4732]: I0131 09:17:30.278365 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:31 crc kubenswrapper[4732]: I0131 09:17:31.817340 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:33 crc kubenswrapper[4732]: I0131 09:17:33.424969 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:34 crc kubenswrapper[4732]: I0131 09:17:34.590230 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:34 crc kubenswrapper[4732]: I0131 09:17:34.591693 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"
Jan 31 09:17:34 crc kubenswrapper[4732]: I0131 09:17:34.979111 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:36 crc kubenswrapper[4732]: I0131 09:17:36.538219 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-thntz_98f46fa4-b924-478d-a6d5-6070a4a75aee/swift-ring-rebalance/0.log"
Jan 31 09:17:37 crc kubenswrapper[4732]: I0131 09:17:37.904122 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 31 09:17:37 crc kubenswrapper[4732]: E0131 09:17:37.904514 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f46fa4-b924-478d-a6d5-6070a4a75aee" containerName="swift-ring-rebalance"
Jan 31 09:17:37 crc kubenswrapper[4732]: I0131 09:17:37.904529 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f46fa4-b924-478d-a6d5-6070a4a75aee" containerName="swift-ring-rebalance"
Jan 31 09:17:37 crc kubenswrapper[4732]: I0131 09:17:37.904741 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f46fa4-b924-478d-a6d5-6070a4a75aee" containerName="swift-ring-rebalance"
Jan 31 09:17:37 crc kubenswrapper[4732]: I0131 09:17:37.908718 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Jan 31 09:17:37 crc kubenswrapper[4732]: I0131 09:17:37.910161 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 31 09:17:37 crc kubenswrapper[4732]: I0131 09:17:37.915453 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:17:37 crc kubenswrapper[4732]: I0131 09:17:37.924464 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 31 09:17:37 crc kubenswrapper[4732]: I0131 09:17:37.934047 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.023924 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.024012 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-etc-swift\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.024049 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gtpt\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-kube-api-access-5gtpt\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.024066 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-cache\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.024083 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-etc-swift\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.024096 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-lock\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.024120 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-lock\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1"
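[annotation] local-storage03-crc and local-storage10-crc here, like local-storage06-crc earlier (which the kubelet mounted from /mnt/openstack/pv06), are kubernetes.io/local-volume PersistentVolumes pinned to the crc node. A sketch of what such a PV looks like in Go types; the capacity and the pv03 path are assumptions by analogy with the pv06 path seen earlier in this log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pv := &corev1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{Name: "local-storage03-crc"},
            Spec: corev1.PersistentVolumeSpec{
                Capacity: corev1.ResourceList{
                    corev1.ResourceStorage: resource.MustParse("10Gi"), // assumed size
                },
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                PersistentVolumeSource: corev1.PersistentVolumeSource{
                    Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv03"}, // assumed path
                },
                // Local PVs must be pinned to the node that owns the disk.
                NodeAffinity: &corev1.VolumeNodeAffinity{
                    Required: &corev1.NodeSelector{
                        NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                            MatchExpressions: []corev1.NodeSelectorRequirement{{
                                Key:      "kubernetes.io/hostname",
                                Operator: corev1.NodeSelectorOpIn,
                                Values:   []string{"crc"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", pv)
    }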
\"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.024160 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-cache\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.024184 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqsrn\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-kube-api-access-kqsrn\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.024222 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.125989 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126044 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-etc-swift\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126085 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gtpt\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-kube-api-access-5gtpt\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126121 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-cache\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126142 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-etc-swift\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126159 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-lock\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126190 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"lock\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-lock\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126232 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-cache\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126260 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqsrn\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-kube-api-access-kqsrn\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126304 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126423 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") device mount path \"/mnt/openstack/pv03\"" pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.126502 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") device mount path \"/mnt/openstack/pv10\"" pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.127241 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-lock\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.127242 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-cache\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.127263 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-cache\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.127309 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-lock\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.145690 4732 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-etc-swift\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.145971 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-etc-swift\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.150023 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqsrn\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-kube-api-access-kqsrn\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.150149 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gtpt\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-kube-api-access-5gtpt\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.150993 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-1\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.154886 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-2\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.248789 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.260069 4732 util.go:30] "No sandbox for pod can be found. 
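
The volume plumbing above for swift-storage-1 and swift-storage-2 walks the reconciler's fixed progression, one line per volume per phase: VerifyControllerAttachedVolume confirms the volume against the actual state of world, MountVolume.MountDevice stages device-backed volumes (only the two local-volume PVs report a device mount path, /mnt/openstack/pv03 and /mnt/openstack/pv10), and MountVolume.SetUp makes each volume available in the pod's directory. A toy model of that ordering, with names taken from the entries above (a sketch, not kubelet code):

package main

import "fmt"

type volume struct {
	name       string
	plugin     string // as in the UniqueName prefixes logged above
	devicePath string // only local volumes log a device mount path
}

// mount prints the three phases in the order the reconciler logs them.
func mount(v volume) {
	fmt.Printf("VerifyControllerAttachedVolume started for %q\n", v.name)
	if v.plugin == "kubernetes.io/local-volume" {
		fmt.Printf("MountVolume.MountDevice succeeded for %q at %s\n", v.name, v.devicePath)
	}
	fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
}

func main() {
	for _, v := range []volume{
		{"local-storage03-crc", "kubernetes.io/local-volume", "/mnt/openstack/pv03"},
		{"etc-swift", "kubernetes.io/projected", ""},
		{"lock", "kubernetes.io/empty-dir", ""},
	} {
		mount(v)
	}
}
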
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.248789 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.260069 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.726330 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 31 09:17:38 crc kubenswrapper[4732]: W0131 09:17:38.735063 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod485e2c17_77f1_4b13_ad2a_1afe1034b82e.slice/crio-74e38e927b7f9f5222a587afffa38799ef87d240853e804c1d4e908457454209 WatchSource:0}: Error finding container 74e38e927b7f9f5222a587afffa38799ef87d240853e804c1d4e908457454209: Status 404 returned error can't find the container with id 74e38e927b7f9f5222a587afffa38799ef87d240853e804c1d4e908457454209
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.803071 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 31 09:17:38 crc kubenswrapper[4732]: W0131 09:17:38.804500 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20196d3e_600c_4a25_97ef_86f81bfae43b.slice/crio-87f5d0b315407f4d9f600594a48f694f95c4330a184853c9d3591930eb2df4a1 WatchSource:0}: Error finding container 87f5d0b315407f4d9f600594a48f694f95c4330a184853c9d3591930eb2df4a1: Status 404 returned error can't find the container with id 87f5d0b315407f4d9f600594a48f694f95c4330a184853c9d3591930eb2df4a1
Jan 31 09:17:38 crc kubenswrapper[4732]: I0131 09:17:38.810117 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"74e38e927b7f9f5222a587afffa38799ef87d240853e804c1d4e908457454209"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.820697 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"3ca9c58e45467d768a6b9a7ce40fa6bc0ba4cec3710eae69487dc0106ba42c67"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.821069 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"f6d112eeaf369a7cae9a04806380ef040d7989006ab740a5b0d945ef5f7f317b"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.821085 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"5d118436256aaad4b93c230d8fa0456e8cc6f3c5ca6ce4c9d7f30f8db3c450ff"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.821098 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"bcb9d265c1a8c03fc26ba3b2f05fa7fcceb6cc59b711a2e0dd620cf846db5469"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.821112 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"f47b66d5b1e0a89ad9b955fbf6dde3f0b21e522e86e33f379a1e2379b7919989"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.821123 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"87f5d0b315407f4d9f600594a48f694f95c4330a184853c9d3591930eb2df4a1"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.825858 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"0630d2ca7e09803aac756e9568f7669b86f328b502c8fbec24c1ab333009da4d"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.825907 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"f72f8dedea2940a6aab6b6081a6ac32f4fe8a817a636c7292b3740e840aacaf8"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.825921 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"069a866bbc2edc4e0416ac1351e656e79b93160d557be633a57aa05933c9d37a"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.825934 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"0eb6a32653dab2879fd0e39ad42f712d86e6b8054d3507176f2d9dfda652811b"}
Jan 31 09:17:39 crc kubenswrapper[4732]: I0131 09:17:39.825945 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"ef329846cba5f043851b0fac85462ce13c21ac2e48891f331c0b61745c33b6de"}
Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.842145 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"0939cdb203f97f24dcd9e0d0769010bdb9148423fa723be6da835b8c3fcd94a3"}
Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.842467 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"f59d6e8e79e5f65eaab0514cd4a4f2da1508f44198f23637971d0e31f2b0f2be"}
Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.842478 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"6c6f43f8261cf2448edf43a43564636ba04ed0b128e0df16f2584d3972a973a9"}
Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.842487 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"b26aaf7029934891de5d0e92f8463ca1f8e106c401d898590351656c24bbc6e8"}
Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.842495 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"ecfb0c165d8f537d03e0e64cf12e24a1575f070da3670ea4769b182ee59dcc20"}
Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.847881 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"23a3dbdcb5e8c0d9b5e2b66e270ff60b28361fccb19ac39e68496d454544c773"}
event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"23a3dbdcb5e8c0d9b5e2b66e270ff60b28361fccb19ac39e68496d454544c773"} Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.847920 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"95aedeb15c9b618726f462f7211b9b97027622f3a2b198dc9717394e81868db5"} Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.847931 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"dcca573b69212cf173983d6090be0a12978f5decb7c01cbd3779f1d4d2a6b0d0"} Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.847942 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"de878ffff4963739e745def44cb56f453584bcd3f75e8cb3d9b85f0ca8d4e512"} Jan 31 09:17:40 crc kubenswrapper[4732]: I0131 09:17:40.847951 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"750de579c99e37a98c19df2a11fee436a72f515c72747a3e91a0abc97cd07385"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.852227 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-thntz"] Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.858254 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-thntz"] Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.864431 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"07ba04fcdc461974d0c6b09ac9da6b0c6610f36749f50f5eb47e063be68f5291"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.864648 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"3791aa9fdd8ee5b8255c509c99988f621034603e232b3c13e0287a092538c4df"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.864747 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"bd9e9b782c4457e3bdc1029eec72a89f862eb48d009287030b3d15352e63d945"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.864816 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"aba53546aab560ce992f9ed9d7c70a191eee00bcb246cc44c3bcf9dc6c6a964d"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.864885 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerStarted","Data":"87b04e76193dc458b5e96db200183f64f7cda5171af7bf59a9b3b47ca7c00582"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.869637 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"29d6915d3be45a209958c78f32e6cbfa5546c65adc3eb566381972012beea587"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.869705 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"2038dec3545765c898c51198c887beb686d603d6257cd634255dc773c64b0920"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.869724 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"171dd93dab917d5e3ad3010e206ac5f2aa6f0312273716ac344a73726703fbda"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.869739 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"1839639d5b9952c70ff36e7c3b972e786bbfb15314211dc0fe15a6700ac4d580"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.869750 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerStarted","Data":"17809bd753b75791dd2fa65451f79147382e0f41d6052d606f342c4037233a9a"} Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.875128 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-pdbq2"] Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.876006 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.877692 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.878311 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.889363 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-pdbq2"] Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.923472 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=5.923452954 podStartE2EDuration="5.923452954s" podCreationTimestamp="2026-01-31 09:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:17:41.922530576 +0000 UTC m=+1000.228406790" watchObservedRunningTime="2026-01-31 09:17:41.923452954 +0000 UTC m=+1000.229329158" Jan 31 09:17:41 crc kubenswrapper[4732]: I0131 09:17:41.969032 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=5.969011742 podStartE2EDuration="5.969011742s" podCreationTimestamp="2026-01-31 09:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:17:41.962245681 +0000 UTC m=+1000.268121895" watchObservedRunningTime="2026-01-31 09:17:41.969011742 +0000 UTC m=+1000.274887946" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.029563 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-scripts\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.029625 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-ring-data-devices\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.029947 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4ml7\" (UniqueName: \"kubernetes.io/projected/78c16da3-2938-49d9-b36d-3d71fe0d48f3-kube-api-access-j4ml7\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.030128 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-swiftconf\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.030223 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/78c16da3-2938-49d9-b36d-3d71fe0d48f3-etc-swift\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.030338 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-dispersionconf\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.132229 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-scripts\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.132285 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-ring-data-devices\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.132326 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4ml7\" (UniqueName: \"kubernetes.io/projected/78c16da3-2938-49d9-b36d-3d71fe0d48f3-kube-api-access-j4ml7\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.132383 4732 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-swiftconf\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.132422 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/78c16da3-2938-49d9-b36d-3d71fe0d48f3-etc-swift\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.132451 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-dispersionconf\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.133362 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-scripts\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.133645 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-ring-data-devices\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.133912 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/78c16da3-2938-49d9-b36d-3d71fe0d48f3-etc-swift\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.138742 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-dispersionconf\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.138756 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-swiftconf\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.161851 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4ml7\" (UniqueName: \"kubernetes.io/projected/78c16da3-2938-49d9-b36d-3d71fe0d48f3-kube-api-access-j4ml7\") pod \"swift-ring-rebalance-pdbq2\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.201299 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.406571 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-pdbq2"] Jan 31 09:17:42 crc kubenswrapper[4732]: W0131 09:17:42.409736 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c16da3_2938_49d9_b36d_3d71fe0d48f3.slice/crio-5d9c9f7c690ec50a833bec1f866ba679d131888630ee0174c0ab8f67646dad3c WatchSource:0}: Error finding container 5d9c9f7c690ec50a833bec1f866ba679d131888630ee0174c0ab8f67646dad3c: Status 404 returned error can't find the container with id 5d9c9f7c690ec50a833bec1f866ba679d131888630ee0174c0ab8f67646dad3c Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.561894 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f46fa4-b924-478d-a6d5-6070a4a75aee" path="/var/lib/kubelet/pods/98f46fa4-b924-478d-a6d5-6070a4a75aee/volumes" Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.886074 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" event={"ID":"78c16da3-2938-49d9-b36d-3d71fe0d48f3","Type":"ContainerStarted","Data":"34e2cf0f4161fbb85eb0df7049a2e4f5c156fd2607d987c6bef858acc9fffa46"} Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.886613 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" event={"ID":"78c16da3-2938-49d9-b36d-3d71fe0d48f3","Type":"ContainerStarted","Data":"5d9c9f7c690ec50a833bec1f866ba679d131888630ee0174c0ab8f67646dad3c"} Jan 31 09:17:42 crc kubenswrapper[4732]: I0131 09:17:42.925863 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" podStartSLOduration=1.9258382059999999 podStartE2EDuration="1.925838206s" podCreationTimestamp="2026-01-31 09:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:17:42.911420707 +0000 UTC m=+1001.217296961" watchObservedRunningTime="2026-01-31 09:17:42.925838206 +0000 UTC m=+1001.231714450" Jan 31 09:17:50 crc kubenswrapper[4732]: I0131 09:17:50.955173 4732 generic.go:334] "Generic (PLEG): container finished" podID="78c16da3-2938-49d9-b36d-3d71fe0d48f3" containerID="34e2cf0f4161fbb85eb0df7049a2e4f5c156fd2607d987c6bef858acc9fffa46" exitCode=0 Jan 31 09:17:50 crc kubenswrapper[4732]: I0131 09:17:50.955764 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" event={"ID":"78c16da3-2938-49d9-b36d-3d71fe0d48f3","Type":"ContainerDied","Data":"34e2cf0f4161fbb85eb0df7049a2e4f5c156fd2607d987c6bef858acc9fffa46"} Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.292298 4732 util.go:48] "No ready sandbox for pod can be found. 
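
The swift-ring-rebalance-pdbq2 pod above is a run-to-completion job: generic.go reports its container finishing with exitCode=0, PLEG turns that into a ContainerDied event, and the later "No ready sandbox" message is the sync loop observing a pod whose sandbox has exited, not a failure. The inverse query, finding containers that died with a non-zero code, is a natural one to run over a log like this; a sketch of that tooling (the editor's, not part of kubelet):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the generic.go:334 "container finished" entries seen above.
var finishedRe = regexp.MustCompile(`container finished.*containerID="([0-9a-f]+)" exitCode=(\d+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // kubelet log lines can be very long
	for sc.Scan() {
		if m := finishedRe.FindStringSubmatch(sc.Text()); m != nil && m[2] != "0" {
			fmt.Printf("container %s exited with code %s\n", m[1], m[2])
		}
	}
}
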
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.388791 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-scripts\") pod \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") "
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.388864 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-dispersionconf\") pod \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") "
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.388894 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-ring-data-devices\") pod \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") "
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.388925 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/78c16da3-2938-49d9-b36d-3d71fe0d48f3-etc-swift\") pod \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") "
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.389253 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-swiftconf\") pod \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") "
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.389285 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4ml7\" (UniqueName: \"kubernetes.io/projected/78c16da3-2938-49d9-b36d-3d71fe0d48f3-kube-api-access-j4ml7\") pod \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\" (UID: \"78c16da3-2938-49d9-b36d-3d71fe0d48f3\") "
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.390637 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "78c16da3-2938-49d9-b36d-3d71fe0d48f3" (UID: "78c16da3-2938-49d9-b36d-3d71fe0d48f3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.391649 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c16da3-2938-49d9-b36d-3d71fe0d48f3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "78c16da3-2938-49d9-b36d-3d71fe0d48f3" (UID: "78c16da3-2938-49d9-b36d-3d71fe0d48f3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.405103 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c16da3-2938-49d9-b36d-3d71fe0d48f3-kube-api-access-j4ml7" (OuterVolumeSpecName: "kube-api-access-j4ml7") pod "78c16da3-2938-49d9-b36d-3d71fe0d48f3" (UID: "78c16da3-2938-49d9-b36d-3d71fe0d48f3"). InnerVolumeSpecName "kube-api-access-j4ml7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.417910 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-scripts" (OuterVolumeSpecName: "scripts") pod "78c16da3-2938-49d9-b36d-3d71fe0d48f3" (UID: "78c16da3-2938-49d9-b36d-3d71fe0d48f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.423850 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "78c16da3-2938-49d9-b36d-3d71fe0d48f3" (UID: "78c16da3-2938-49d9-b36d-3d71fe0d48f3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.426219 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "78c16da3-2938-49d9-b36d-3d71fe0d48f3" (UID: "78c16da3-2938-49d9-b36d-3d71fe0d48f3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.491408 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.491460 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.491471 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/78c16da3-2938-49d9-b36d-3d71fe0d48f3-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.491482 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/78c16da3-2938-49d9-b36d-3d71fe0d48f3-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.491493 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4ml7\" (UniqueName: \"kubernetes.io/projected/78c16da3-2938-49d9-b36d-3d71fe0d48f3-kube-api-access-j4ml7\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.491507 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78c16da3-2938-49d9-b36d-3d71fe0d48f3-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.978122 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2" event={"ID":"78c16da3-2938-49d9-b36d-3d71fe0d48f3","Type":"ContainerDied","Data":"5d9c9f7c690ec50a833bec1f866ba679d131888630ee0174c0ab8f67646dad3c"}
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.978180 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9c9f7c690ec50a833bec1f866ba679d131888630ee0174c0ab8f67646dad3c"
Jan 31 09:17:52 crc kubenswrapper[4732]: I0131 09:17:52.978502 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-pdbq2"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.220029 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"]
Jan 31 09:17:53 crc kubenswrapper[4732]: E0131 09:17:53.220483 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c16da3-2938-49d9-b36d-3d71fe0d48f3" containerName="swift-ring-rebalance"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.220510 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c16da3-2938-49d9-b36d-3d71fe0d48f3" containerName="swift-ring-rebalance"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.220779 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c16da3-2938-49d9-b36d-3d71fe0d48f3" containerName="swift-ring-rebalance"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.221544 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.224154 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.224239 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.229564 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"]
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.302987 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-dispersionconf\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.303076 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-swiftconf\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.303343 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b18c4acb-f5c3-45ef-b69a-a324e8f46803-etc-swift\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.303491 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-ring-data-devices\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.303584 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mgb8\" (UniqueName: \"kubernetes.io/projected/b18c4acb-f5c3-45ef-b69a-a324e8f46803-kube-api-access-8mgb8\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.303649 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-scripts\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.405708 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-ring-data-devices\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.405762 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mgb8\" (UniqueName: \"kubernetes.io/projected/b18c4acb-f5c3-45ef-b69a-a324e8f46803-kube-api-access-8mgb8\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.405789 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-scripts\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.405865 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-dispersionconf\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.405893 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-swiftconf\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.405927 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b18c4acb-f5c3-45ef-b69a-a324e8f46803-etc-swift\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.406500 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b18c4acb-f5c3-45ef-b69a-a324e8f46803-etc-swift\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.406501 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-scripts\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.406774 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-ring-data-devices\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.411131 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-dispersionconf\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.411955 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-swiftconf\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.423190 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mgb8\" (UniqueName: \"kubernetes.io/projected/b18c4acb-f5c3-45ef-b69a-a324e8f46803-kube-api-access-8mgb8\") pod \"swift-ring-rebalance-debug-kqn2b\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.581493 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.841721 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"]
Jan 31 09:17:53 crc kubenswrapper[4732]: I0131 09:17:53.989235 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b" event={"ID":"b18c4acb-f5c3-45ef-b69a-a324e8f46803","Type":"ContainerStarted","Data":"28b08967c440d2c1d91bd3c1c47e2882180961c7b75bf5e786868b7ee1372640"}
Jan 31 09:17:54 crc kubenswrapper[4732]: I0131 09:17:54.999695 4732 generic.go:334] "Generic (PLEG): container finished" podID="b18c4acb-f5c3-45ef-b69a-a324e8f46803" containerID="3c637fef25b194b9f5bca1b27d2b6a30e6bc6beb823cf5161054f65dd6ad4520" exitCode=0
Jan 31 09:17:55 crc kubenswrapper[4732]: I0131 09:17:54.999791 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b" event={"ID":"b18c4acb-f5c3-45ef-b69a-a324e8f46803","Type":"ContainerDied","Data":"3c637fef25b194b9f5bca1b27d2b6a30e6bc6beb823cf5161054f65dd6ad4520"}
Jan 31 09:17:55 crc kubenswrapper[4732]: I0131 09:17:55.044237 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"]
Jan 31 09:17:55 crc kubenswrapper[4732]: I0131 09:17:55.056021 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"]
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.264908 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b"
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.384562 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-ring-data-devices\") pod \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") "
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.384737 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mgb8\" (UniqueName: \"kubernetes.io/projected/b18c4acb-f5c3-45ef-b69a-a324e8f46803-kube-api-access-8mgb8\") pod \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") "
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.384867 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-swiftconf\") pod \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") "
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.384930 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b18c4acb-f5c3-45ef-b69a-a324e8f46803-etc-swift\") pod \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") "
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.385038 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-scripts\") pod \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") "
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.385136 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-dispersionconf\") pod \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\" (UID: \"b18c4acb-f5c3-45ef-b69a-a324e8f46803\") "
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.385448 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b18c4acb-f5c3-45ef-b69a-a324e8f46803" (UID: "b18c4acb-f5c3-45ef-b69a-a324e8f46803"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.386625 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18c4acb-f5c3-45ef-b69a-a324e8f46803-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b18c4acb-f5c3-45ef-b69a-a324e8f46803" (UID: "b18c4acb-f5c3-45ef-b69a-a324e8f46803"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.392258 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18c4acb-f5c3-45ef-b69a-a324e8f46803-kube-api-access-8mgb8" (OuterVolumeSpecName: "kube-api-access-8mgb8") pod "b18c4acb-f5c3-45ef-b69a-a324e8f46803" (UID: "b18c4acb-f5c3-45ef-b69a-a324e8f46803"). InnerVolumeSpecName "kube-api-access-8mgb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.411776 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b18c4acb-f5c3-45ef-b69a-a324e8f46803" (UID: "b18c4acb-f5c3-45ef-b69a-a324e8f46803"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.414600 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-scripts" (OuterVolumeSpecName: "scripts") pod "b18c4acb-f5c3-45ef-b69a-a324e8f46803" (UID: "b18c4acb-f5c3-45ef-b69a-a324e8f46803"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.422792 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b18c4acb-f5c3-45ef-b69a-a324e8f46803" (UID: "b18c4acb-f5c3-45ef-b69a-a324e8f46803"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.486760 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh"]
Jan 31 09:17:56 crc kubenswrapper[4732]: E0131 09:17:56.487115 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18c4acb-f5c3-45ef-b69a-a324e8f46803" containerName="swift-ring-rebalance"
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.487140 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18c4acb-f5c3-45ef-b69a-a324e8f46803" containerName="swift-ring-rebalance"
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.487394 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18c4acb-f5c3-45ef-b69a-a324e8f46803" containerName="swift-ring-rebalance"
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.487734 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.487775 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.487794 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b18c4acb-f5c3-45ef-b69a-a324e8f46803-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.487807 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mgb8\" (UniqueName: \"kubernetes.io/projected/b18c4acb-f5c3-45ef-b69a-a324e8f46803-kube-api-access-8mgb8\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.487821 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b18c4acb-f5c3-45ef-b69a-a324e8f46803-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.487832 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b18c4acb-f5c3-45ef-b69a-a324e8f46803-etc-swift\") on node \"crc\" DevicePath \"\""
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.502949 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh"] Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.551450 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18c4acb-f5c3-45ef-b69a-a324e8f46803" path="/var/lib/kubelet/pods/b18c4acb-f5c3-45ef-b69a-a324e8f46803/volumes" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.589336 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-scripts\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.589385 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-dispersionconf\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.589407 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-ring-data-devices\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.589547 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3323fc69-96ba-4767-aca9-a094ee4511fa-etc-swift\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.589595 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-swiftconf\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.589635 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zd8h\" (UniqueName: \"kubernetes.io/projected/3323fc69-96ba-4767-aca9-a094ee4511fa-kube-api-access-4zd8h\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.691229 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-scripts\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.691320 4732 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-dispersionconf\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.691363 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-ring-data-devices\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.691469 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3323fc69-96ba-4767-aca9-a094ee4511fa-etc-swift\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.691518 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-swiftconf\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.691559 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zd8h\" (UniqueName: \"kubernetes.io/projected/3323fc69-96ba-4767-aca9-a094ee4511fa-kube-api-access-4zd8h\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.693018 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-scripts\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.693533 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3323fc69-96ba-4767-aca9-a094ee4511fa-etc-swift\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.693719 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-ring-data-devices\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.696147 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-swiftconf\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.697128 4732 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-dispersionconf\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.713540 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zd8h\" (UniqueName: \"kubernetes.io/projected/3323fc69-96ba-4767-aca9-a094ee4511fa-kube-api-access-4zd8h\") pod \"swift-ring-rebalance-debug-2n2dh\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:56 crc kubenswrapper[4732]: I0131 09:17:56.812989 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:57 crc kubenswrapper[4732]: I0131 09:17:57.023731 4732 scope.go:117] "RemoveContainer" containerID="3c637fef25b194b9f5bca1b27d2b6a30e6bc6beb823cf5161054f65dd6ad4520" Jan 31 09:17:57 crc kubenswrapper[4732]: I0131 09:17:57.023764 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kqn2b" Jan 31 09:17:57 crc kubenswrapper[4732]: I0131 09:17:57.277827 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh"] Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.034910 4732 generic.go:334] "Generic (PLEG): container finished" podID="3323fc69-96ba-4767-aca9-a094ee4511fa" containerID="65b84d4ebc25368a071a65e4d20d0e365ba1b7010ab100f5e787fd36a34406cf" exitCode=0 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.035031 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" event={"ID":"3323fc69-96ba-4767-aca9-a094ee4511fa","Type":"ContainerDied","Data":"65b84d4ebc25368a071a65e4d20d0e365ba1b7010ab100f5e787fd36a34406cf"} Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.035474 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" event={"ID":"3323fc69-96ba-4767-aca9-a094ee4511fa","Type":"ContainerStarted","Data":"7b29da91d53315893aeacee191e80a9a9ee11a976647a732e826f2cf163e9590"} Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.081381 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh"] Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.089506 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh"] Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.180432 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-pdbq2"] Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.194390 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-pdbq2"] Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.205817 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206530 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-expirer" 
containerID="cri-o://171dd93dab917d5e3ad3010e206ac5f2aa6f0312273716ac344a73726703fbda" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206555 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-server" containerID="cri-o://f59d6e8e79e5f65eaab0514cd4a4f2da1508f44198f23637971d0e31f2b0f2be" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206579 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-auditor" containerID="cri-o://17809bd753b75791dd2fa65451f79147382e0f41d6052d606f342c4037233a9a" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206609 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-server" containerID="cri-o://0630d2ca7e09803aac756e9568f7669b86f328b502c8fbec24c1ab333009da4d" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206699 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-updater" containerID="cri-o://1839639d5b9952c70ff36e7c3b972e786bbfb15314211dc0fe15a6700ac4d580" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206723 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-replicator" containerID="cri-o://0939cdb203f97f24dcd9e0d0769010bdb9148423fa723be6da835b8c3fcd94a3" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206759 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-updater" containerID="cri-o://6c6f43f8261cf2448edf43a43564636ba04ed0b128e0df16f2584d3972a973a9" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206802 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="swift-recon-cron" containerID="cri-o://29d6915d3be45a209958c78f32e6cbfa5546c65adc3eb566381972012beea587" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206814 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-auditor" containerID="cri-o://b26aaf7029934891de5d0e92f8463ca1f8e106c401d898590351656c24bbc6e8" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206827 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-replicator" containerID="cri-o://ecfb0c165d8f537d03e0e64cf12e24a1575f070da3670ea4769b182ee59dcc20" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206859 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" 
podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="rsync" containerID="cri-o://2038dec3545765c898c51198c887beb686d603d6257cd634255dc773c64b0920" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206869 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-auditor" containerID="cri-o://069a866bbc2edc4e0416ac1351e656e79b93160d557be633a57aa05933c9d37a" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206880 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-reaper" containerID="cri-o://f72f8dedea2940a6aab6b6081a6ac32f4fe8a817a636c7292b3740e840aacaf8" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206891 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-replicator" containerID="cri-o://0eb6a32653dab2879fd0e39ad42f712d86e6b8054d3507176f2d9dfda652811b" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.206931 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-server" containerID="cri-o://ef329846cba5f043851b0fac85462ce13c21ac2e48891f331c0b61745c33b6de" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.243712 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244223 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-server" containerID="cri-o://f47b66d5b1e0a89ad9b955fbf6dde3f0b21e522e86e33f379a1e2379b7919989" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244289 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-server" containerID="cri-o://95aedeb15c9b618726f462f7211b9b97027622f3a2b198dc9717394e81868db5" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244357 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-updater" containerID="cri-o://dcca573b69212cf173983d6090be0a12978f5decb7c01cbd3779f1d4d2a6b0d0" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244413 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-auditor" containerID="cri-o://de878ffff4963739e745def44cb56f453584bcd3f75e8cb3d9b85f0ca8d4e512" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244462 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-replicator" containerID="cri-o://750de579c99e37a98c19df2a11fee436a72f515c72747a3e91a0abc97cd07385" 
gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244502 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-server" containerID="cri-o://3ca9c58e45467d768a6b9a7ce40fa6bc0ba4cec3710eae69487dc0106ba42c67" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244536 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-reaper" containerID="cri-o://f6d112eeaf369a7cae9a04806380ef040d7989006ab740a5b0d945ef5f7f317b" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244573 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-auditor" containerID="cri-o://5d118436256aaad4b93c230d8fa0456e8cc6f3c5ca6ce4c9d7f30f8db3c450ff" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244606 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-replicator" containerID="cri-o://bcb9d265c1a8c03fc26ba3b2f05fa7fcceb6cc59b711a2e0dd620cf846db5469" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244752 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-expirer" containerID="cri-o://bd9e9b782c4457e3bdc1029eec72a89f862eb48d009287030b3d15352e63d945" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244797 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-updater" containerID="cri-o://aba53546aab560ce992f9ed9d7c70a191eee00bcb246cc44c3bcf9dc6c6a964d" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244810 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="swift-recon-cron" containerID="cri-o://07ba04fcdc461974d0c6b09ac9da6b0c6610f36749f50f5eb47e063be68f5291" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244852 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="rsync" containerID="cri-o://3791aa9fdd8ee5b8255c509c99988f621034603e232b3c13e0287a092538c4df" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244875 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-auditor" containerID="cri-o://87b04e76193dc458b5e96db200183f64f7cda5171af7bf59a9b3b47ca7c00582" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.244932 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-replicator" 
containerID="cri-o://23a3dbdcb5e8c0d9b5e2b66e270ff60b28361fccb19ac39e68496d454544c773" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.264741 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.265298 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-server" containerID="cri-o://61cb3b1d16a1d1d363ee3738835e89a0f2f6d3986bf10918c00e484f7f8c9ef7" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.265751 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="swift-recon-cron" containerID="cri-o://81f782a2d54156b4990bcbb2dab091aeb970934a10ac85e7abc67c813255ffee" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.265807 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="rsync" containerID="cri-o://db1b9074ef0795f714675af65f50c4b87204369e4a7ad86e330547b5ab8f05fb" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.265840 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-expirer" containerID="cri-o://7f3c24b1866708a685d278257ae506a6fd79ff0289067bb9f41d1ad991336cc6" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.265870 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-updater" containerID="cri-o://5e6d65fa1c26ce7ebf7ff25c277b074ca08845044a94dbdd6b32ff300ac68b7e" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.265896 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-auditor" containerID="cri-o://b045d34930a290c72208028dc6d1d6b3535e4b8818d51998647df06b3f7c2bdc" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.265929 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-replicator" containerID="cri-o://288678dcae1157f1c7ebe15c590533bc3accdd196dbe20808122f33180c2092e" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.265970 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-server" containerID="cri-o://7907616b4b6ec30fa97533949de2bee3dfcee1ddac0b222cb83f6c75835a1755" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.266008 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-updater" containerID="cri-o://20b5017b8e043011a79ad2661ea73a50a94d52bc115728a4ab154b985c7430df" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.266038 4732 
kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-auditor" containerID="cri-o://73d8c40ae242fa0366cc532a47eefd15e47b7f5aeeb79eb1c8113e496beaf6e2" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.266065 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-replicator" containerID="cri-o://2ab9eec16ab6fbc15e432a8ef644dbb519f0a4dd256590fb6f9676ffceba4611" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.266094 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-server" containerID="cri-o://6a55e95bf79b05700a43bb7800d35ec7f862e81feb1caac0721521392f5f8e7f" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.266121 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-reaper" containerID="cri-o://7f58b9ade96eae3a4c9b405f187c769232309e0c1fb241bf5444c4c304ab649f" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.266150 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-auditor" containerID="cri-o://dbfd2c11334695f732524b533be449dd61ebf7d7637f32b3bcfe1d1941b37863" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.266178 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-replicator" containerID="cri-o://b63fa3de54dc547aaba5e3c717d85f707b81c7144e50e21b9e827db634951724" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.295339 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"] Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.295558 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-httpd" containerID="cri-o://5603cb6f58751f09a186156f4f600f7ee6585d6eeac0b6ca57c68aa2fa8a4f68" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.297160 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-server" containerID="cri-o://9fc5b88fc650585c4cf7595095cbcdfc58d2b17c1bb6d6f6621c5213c78ab1f0" gracePeriod=30 Jan 31 09:17:58 crc kubenswrapper[4732]: I0131 09:17:58.553384 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c16da3-2938-49d9-b36d-3d71fe0d48f3" path="/var/lib/kubelet/pods/78c16da3-2938-49d9-b36d-3d71fe0d48f3/volumes" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053373 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="bd9e9b782c4457e3bdc1029eec72a89f862eb48d009287030b3d15352e63d945" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 
09:17:59.053409 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="aba53546aab560ce992f9ed9d7c70a191eee00bcb246cc44c3bcf9dc6c6a964d" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053421 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="87b04e76193dc458b5e96db200183f64f7cda5171af7bf59a9b3b47ca7c00582" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053431 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="23a3dbdcb5e8c0d9b5e2b66e270ff60b28361fccb19ac39e68496d454544c773" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053439 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="dcca573b69212cf173983d6090be0a12978f5decb7c01cbd3779f1d4d2a6b0d0" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053448 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="de878ffff4963739e745def44cb56f453584bcd3f75e8cb3d9b85f0ca8d4e512" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053460 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="750de579c99e37a98c19df2a11fee436a72f515c72747a3e91a0abc97cd07385" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053470 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="f6d112eeaf369a7cae9a04806380ef040d7989006ab740a5b0d945ef5f7f317b" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053479 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="5d118436256aaad4b93c230d8fa0456e8cc6f3c5ca6ce4c9d7f30f8db3c450ff" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053487 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="bcb9d265c1a8c03fc26ba3b2f05fa7fcceb6cc59b711a2e0dd620cf846db5469" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053497 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="f47b66d5b1e0a89ad9b955fbf6dde3f0b21e522e86e33f379a1e2379b7919989" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053453 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"bd9e9b782c4457e3bdc1029eec72a89f862eb48d009287030b3d15352e63d945"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053568 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"aba53546aab560ce992f9ed9d7c70a191eee00bcb246cc44c3bcf9dc6c6a964d"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053584 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"87b04e76193dc458b5e96db200183f64f7cda5171af7bf59a9b3b47ca7c00582"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053598 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"23a3dbdcb5e8c0d9b5e2b66e270ff60b28361fccb19ac39e68496d454544c773"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053610 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"dcca573b69212cf173983d6090be0a12978f5decb7c01cbd3779f1d4d2a6b0d0"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053620 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"de878ffff4963739e745def44cb56f453584bcd3f75e8cb3d9b85f0ca8d4e512"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053630 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"750de579c99e37a98c19df2a11fee436a72f515c72747a3e91a0abc97cd07385"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053641 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"f6d112eeaf369a7cae9a04806380ef040d7989006ab740a5b0d945ef5f7f317b"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053651 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"5d118436256aaad4b93c230d8fa0456e8cc6f3c5ca6ce4c9d7f30f8db3c450ff"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053675 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"bcb9d265c1a8c03fc26ba3b2f05fa7fcceb6cc59b711a2e0dd620cf846db5469"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.053687 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"f47b66d5b1e0a89ad9b955fbf6dde3f0b21e522e86e33f379a1e2379b7919989"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061006 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="171dd93dab917d5e3ad3010e206ac5f2aa6f0312273716ac344a73726703fbda" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061055 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="1839639d5b9952c70ff36e7c3b972e786bbfb15314211dc0fe15a6700ac4d580" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061070 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="17809bd753b75791dd2fa65451f79147382e0f41d6052d606f342c4037233a9a" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061084 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="0939cdb203f97f24dcd9e0d0769010bdb9148423fa723be6da835b8c3fcd94a3" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061096 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" 
containerID="6c6f43f8261cf2448edf43a43564636ba04ed0b128e0df16f2584d3972a973a9" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061108 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="b26aaf7029934891de5d0e92f8463ca1f8e106c401d898590351656c24bbc6e8" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061121 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="ecfb0c165d8f537d03e0e64cf12e24a1575f070da3670ea4769b182ee59dcc20" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061134 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="0630d2ca7e09803aac756e9568f7669b86f328b502c8fbec24c1ab333009da4d" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061146 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="f72f8dedea2940a6aab6b6081a6ac32f4fe8a817a636c7292b3740e840aacaf8" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061158 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="069a866bbc2edc4e0416ac1351e656e79b93160d557be633a57aa05933c9d37a" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061170 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="0eb6a32653dab2879fd0e39ad42f712d86e6b8054d3507176f2d9dfda652811b" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061234 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"171dd93dab917d5e3ad3010e206ac5f2aa6f0312273716ac344a73726703fbda"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061272 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"1839639d5b9952c70ff36e7c3b972e786bbfb15314211dc0fe15a6700ac4d580"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061292 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"17809bd753b75791dd2fa65451f79147382e0f41d6052d606f342c4037233a9a"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061309 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"0939cdb203f97f24dcd9e0d0769010bdb9148423fa723be6da835b8c3fcd94a3"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061327 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"6c6f43f8261cf2448edf43a43564636ba04ed0b128e0df16f2584d3972a973a9"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061346 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"b26aaf7029934891de5d0e92f8463ca1f8e106c401d898590351656c24bbc6e8"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061363 4732 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"ecfb0c165d8f537d03e0e64cf12e24a1575f070da3670ea4769b182ee59dcc20"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061382 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"0630d2ca7e09803aac756e9568f7669b86f328b502c8fbec24c1ab333009da4d"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061399 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"f72f8dedea2940a6aab6b6081a6ac32f4fe8a817a636c7292b3740e840aacaf8"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061416 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"069a866bbc2edc4e0416ac1351e656e79b93160d557be633a57aa05933c9d37a"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.061432 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"0eb6a32653dab2879fd0e39ad42f712d86e6b8054d3507176f2d9dfda652811b"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067185 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="7f3c24b1866708a685d278257ae506a6fd79ff0289067bb9f41d1ad991336cc6" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067211 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="5e6d65fa1c26ce7ebf7ff25c277b074ca08845044a94dbdd6b32ff300ac68b7e" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067225 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="b045d34930a290c72208028dc6d1d6b3535e4b8818d51998647df06b3f7c2bdc" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067241 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="288678dcae1157f1c7ebe15c590533bc3accdd196dbe20808122f33180c2092e" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067253 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="20b5017b8e043011a79ad2661ea73a50a94d52bc115728a4ab154b985c7430df" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067265 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="73d8c40ae242fa0366cc532a47eefd15e47b7f5aeeb79eb1c8113e496beaf6e2" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067280 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="2ab9eec16ab6fbc15e432a8ef644dbb519f0a4dd256590fb6f9676ffceba4611" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067292 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="7f58b9ade96eae3a4c9b405f187c769232309e0c1fb241bf5444c4c304ab649f" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 
09:17:59.067304 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="dbfd2c11334695f732524b533be449dd61ebf7d7637f32b3bcfe1d1941b37863" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067315 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="b63fa3de54dc547aaba5e3c717d85f707b81c7144e50e21b9e827db634951724" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067371 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"7f3c24b1866708a685d278257ae506a6fd79ff0289067bb9f41d1ad991336cc6"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067399 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"5e6d65fa1c26ce7ebf7ff25c277b074ca08845044a94dbdd6b32ff300ac68b7e"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067418 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"b045d34930a290c72208028dc6d1d6b3535e4b8818d51998647df06b3f7c2bdc"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067435 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"288678dcae1157f1c7ebe15c590533bc3accdd196dbe20808122f33180c2092e"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067452 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"20b5017b8e043011a79ad2661ea73a50a94d52bc115728a4ab154b985c7430df"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067470 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"73d8c40ae242fa0366cc532a47eefd15e47b7f5aeeb79eb1c8113e496beaf6e2"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067487 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"2ab9eec16ab6fbc15e432a8ef644dbb519f0a4dd256590fb6f9676ffceba4611"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067502 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"7f58b9ade96eae3a4c9b405f187c769232309e0c1fb241bf5444c4c304ab649f"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067518 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"dbfd2c11334695f732524b533be449dd61ebf7d7637f32b3bcfe1d1941b37863"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.067534 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"b63fa3de54dc547aaba5e3c717d85f707b81c7144e50e21b9e827db634951724"} Jan 31 09:17:59 crc 
kubenswrapper[4732]: I0131 09:17:59.068995 4732 generic.go:334] "Generic (PLEG): container finished" podID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerID="5603cb6f58751f09a186156f4f600f7ee6585d6eeac0b6ca57c68aa2fa8a4f68" exitCode=0 Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.069262 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" event={"ID":"9e00e76d-2f89-454c-be3b-855e8186c78e","Type":"ContainerDied","Data":"5603cb6f58751f09a186156f4f600f7ee6585d6eeac0b6ca57c68aa2fa8a4f68"} Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.479889 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.588610 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.100:8080/healthcheck\": dial tcp 10.217.0.100:8080: connect: connection refused" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.588744 4732 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-server" probeResult="failure" output="Get \"http://10.217.0.100:8080/healthcheck\": dial tcp 10.217.0.100:8080: connect: connection refused" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.645339 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-swiftconf\") pod \"3323fc69-96ba-4767-aca9-a094ee4511fa\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.645470 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-scripts\") pod \"3323fc69-96ba-4767-aca9-a094ee4511fa\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.645524 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-ring-data-devices\") pod \"3323fc69-96ba-4767-aca9-a094ee4511fa\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.645609 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3323fc69-96ba-4767-aca9-a094ee4511fa-etc-swift\") pod \"3323fc69-96ba-4767-aca9-a094ee4511fa\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.645771 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-dispersionconf\") pod \"3323fc69-96ba-4767-aca9-a094ee4511fa\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.645814 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zd8h\" (UniqueName: \"kubernetes.io/projected/3323fc69-96ba-4767-aca9-a094ee4511fa-kube-api-access-4zd8h\") pod 
\"3323fc69-96ba-4767-aca9-a094ee4511fa\" (UID: \"3323fc69-96ba-4767-aca9-a094ee4511fa\") " Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.647392 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3323fc69-96ba-4767-aca9-a094ee4511fa" (UID: "3323fc69-96ba-4767-aca9-a094ee4511fa"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.648123 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3323fc69-96ba-4767-aca9-a094ee4511fa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3323fc69-96ba-4767-aca9-a094ee4511fa" (UID: "3323fc69-96ba-4767-aca9-a094ee4511fa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.661984 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3323fc69-96ba-4767-aca9-a094ee4511fa-kube-api-access-4zd8h" (OuterVolumeSpecName: "kube-api-access-4zd8h") pod "3323fc69-96ba-4767-aca9-a094ee4511fa" (UID: "3323fc69-96ba-4767-aca9-a094ee4511fa"). InnerVolumeSpecName "kube-api-access-4zd8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.669016 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-scripts" (OuterVolumeSpecName: "scripts") pod "3323fc69-96ba-4767-aca9-a094ee4511fa" (UID: "3323fc69-96ba-4767-aca9-a094ee4511fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.670216 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3323fc69-96ba-4767-aca9-a094ee4511fa" (UID: "3323fc69-96ba-4767-aca9-a094ee4511fa"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.670773 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3323fc69-96ba-4767-aca9-a094ee4511fa" (UID: "3323fc69-96ba-4767-aca9-a094ee4511fa"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.747494 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.747526 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zd8h\" (UniqueName: \"kubernetes.io/projected/3323fc69-96ba-4767-aca9-a094ee4511fa-kube-api-access-4zd8h\") on node \"crc\" DevicePath \"\"" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.747539 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3323fc69-96ba-4767-aca9-a094ee4511fa-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.747547 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.747557 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3323fc69-96ba-4767-aca9-a094ee4511fa-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:17:59 crc kubenswrapper[4732]: I0131 09:17:59.747566 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3323fc69-96ba-4767-aca9-a094ee4511fa-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.076940 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-2n2dh" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.076960 4732 scope.go:117] "RemoveContainer" containerID="65b84d4ebc25368a071a65e4d20d0e365ba1b7010ab100f5e787fd36a34406cf" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.088045 4732 generic.go:334] "Generic (PLEG): container finished" podID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerID="9fc5b88fc650585c4cf7595095cbcdfc58d2b17c1bb6d6f6621c5213c78ab1f0" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.088122 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" event={"ID":"9e00e76d-2f89-454c-be3b-855e8186c78e","Type":"ContainerDied","Data":"9fc5b88fc650585c4cf7595095cbcdfc58d2b17c1bb6d6f6621c5213c78ab1f0"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.101296 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="3791aa9fdd8ee5b8255c509c99988f621034603e232b3c13e0287a092538c4df" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.101327 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="95aedeb15c9b618726f462f7211b9b97027622f3a2b198dc9717394e81868db5" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.101336 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="3ca9c58e45467d768a6b9a7ce40fa6bc0ba4cec3710eae69487dc0106ba42c67" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.101326 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"3791aa9fdd8ee5b8255c509c99988f621034603e232b3c13e0287a092538c4df"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.101369 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"95aedeb15c9b618726f462f7211b9b97027622f3a2b198dc9717394e81868db5"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.101382 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"3ca9c58e45467d768a6b9a7ce40fa6bc0ba4cec3710eae69487dc0106ba42c67"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.128886 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="2038dec3545765c898c51198c887beb686d603d6257cd634255dc773c64b0920" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.128913 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="f59d6e8e79e5f65eaab0514cd4a4f2da1508f44198f23637971d0e31f2b0f2be" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.128922 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="ef329846cba5f043851b0fac85462ce13c21ac2e48891f331c0b61745c33b6de" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.128939 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"2038dec3545765c898c51198c887beb686d603d6257cd634255dc773c64b0920"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.128981 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"f59d6e8e79e5f65eaab0514cd4a4f2da1508f44198f23637971d0e31f2b0f2be"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.128995 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"ef329846cba5f043851b0fac85462ce13c21ac2e48891f331c0b61745c33b6de"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.140611 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="db1b9074ef0795f714675af65f50c4b87204369e4a7ad86e330547b5ab8f05fb" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.140645 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="7907616b4b6ec30fa97533949de2bee3dfcee1ddac0b222cb83f6c75835a1755" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.140656 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="6a55e95bf79b05700a43bb7800d35ec7f862e81feb1caac0721521392f5f8e7f" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.140682 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="61cb3b1d16a1d1d363ee3738835e89a0f2f6d3986bf10918c00e484f7f8c9ef7" exitCode=0 Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.140708 4732 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"db1b9074ef0795f714675af65f50c4b87204369e4a7ad86e330547b5ab8f05fb"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.140738 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"7907616b4b6ec30fa97533949de2bee3dfcee1ddac0b222cb83f6c75835a1755"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.140751 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"6a55e95bf79b05700a43bb7800d35ec7f862e81feb1caac0721521392f5f8e7f"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.140762 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"61cb3b1d16a1d1d363ee3738835e89a0f2f6d3986bf10918c00e484f7f8c9ef7"} Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.400689 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.556219 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") pod \"9e00e76d-2f89-454c-be3b-855e8186c78e\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.556303 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-log-httpd\") pod \"9e00e76d-2f89-454c-be3b-855e8186c78e\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.556405 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k5xz\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-kube-api-access-5k5xz\") pod \"9e00e76d-2f89-454c-be3b-855e8186c78e\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.556432 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00e76d-2f89-454c-be3b-855e8186c78e-config-data\") pod \"9e00e76d-2f89-454c-be3b-855e8186c78e\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.556471 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-run-httpd\") pod \"9e00e76d-2f89-454c-be3b-855e8186c78e\" (UID: \"9e00e76d-2f89-454c-be3b-855e8186c78e\") " Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.556945 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e00e76d-2f89-454c-be3b-855e8186c78e" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.557332 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e00e76d-2f89-454c-be3b-855e8186c78e" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.557579 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3323fc69-96ba-4767-aca9-a094ee4511fa" path="/var/lib/kubelet/pods/3323fc69-96ba-4767-aca9-a094ee4511fa/volumes" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.561815 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9e00e76d-2f89-454c-be3b-855e8186c78e" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.570032 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-kube-api-access-5k5xz" (OuterVolumeSpecName: "kube-api-access-5k5xz") pod "9e00e76d-2f89-454c-be3b-855e8186c78e" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e"). InnerVolumeSpecName "kube-api-access-5k5xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.592344 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e00e76d-2f89-454c-be3b-855e8186c78e-config-data" (OuterVolumeSpecName: "config-data") pod "9e00e76d-2f89-454c-be3b-855e8186c78e" (UID: "9e00e76d-2f89-454c-be3b-855e8186c78e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.659033 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k5xz\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-kube-api-access-5k5xz\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.659120 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00e76d-2f89-454c-be3b-855e8186c78e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.659144 4732 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.659163 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e00e76d-2f89-454c-be3b-855e8186c78e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:00 crc kubenswrapper[4732]: I0131 09:18:00.659183 4732 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e00e76d-2f89-454c-be3b-855e8186c78e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:01 crc kubenswrapper[4732]: I0131 09:18:01.153302 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" event={"ID":"9e00e76d-2f89-454c-be3b-855e8186c78e","Type":"ContainerDied","Data":"70a5df12fcc55bf7e0349357dc9e7a70341a9c65f4cf5241e077476ae04d6820"} Jan 31 09:18:01 crc kubenswrapper[4732]: I0131 09:18:01.153754 4732 scope.go:117] "RemoveContainer" containerID="9fc5b88fc650585c4cf7595095cbcdfc58d2b17c1bb6d6f6621c5213c78ab1f0" Jan 31 09:18:01 crc kubenswrapper[4732]: I0131 09:18:01.153391 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8" Jan 31 09:18:01 crc kubenswrapper[4732]: I0131 09:18:01.177266 4732 scope.go:117] "RemoveContainer" containerID="5603cb6f58751f09a186156f4f600f7ee6585d6eeac0b6ca57c68aa2fa8a4f68" Jan 31 09:18:01 crc kubenswrapper[4732]: I0131 09:18:01.195886 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"] Jan 31 09:18:01 crc kubenswrapper[4732]: I0131 09:18:01.208158 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-f2jx8"] Jan 31 09:18:02 crc kubenswrapper[4732]: I0131 09:18:02.555818 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" path="/var/lib/kubelet/pods/9e00e76d-2f89-454c-be3b-855e8186c78e/volumes" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.435829 4732 generic.go:334] "Generic (PLEG): container finished" podID="410ee08c-4c6c-4012-aa46-264179923617" containerID="81f782a2d54156b4990bcbb2dab091aeb970934a10ac85e7abc67c813255ffee" exitCode=137 Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.435872 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"81f782a2d54156b4990bcbb2dab091aeb970934a10ac85e7abc67c813255ffee"} Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.450193 4732 generic.go:334] "Generic (PLEG): container finished" podID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerID="07ba04fcdc461974d0c6b09ac9da6b0c6610f36749f50f5eb47e063be68f5291" exitCode=137 Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.450247 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"07ba04fcdc461974d0c6b09ac9da6b0c6610f36749f50f5eb47e063be68f5291"} Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.468725 4732 generic.go:334] "Generic (PLEG): container finished" podID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerID="29d6915d3be45a209958c78f32e6cbfa5546c65adc3eb566381972012beea587" exitCode=137 Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.468778 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"29d6915d3be45a209958c78f32e6cbfa5546c65adc3eb566381972012beea587"} Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.731301 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.736575 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.739938 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792587 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-etc-swift\") pod \"20196d3e-600c-4a25-97ef-86f81bfae43b\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792687 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-cache\") pod \"410ee08c-4c6c-4012-aa46-264179923617\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792716 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-lock\") pod \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792743 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-lock\") pod \"410ee08c-4c6c-4012-aa46-264179923617\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792778 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-etc-swift\") pod \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792825 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt7ch\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-kube-api-access-pt7ch\") pod \"410ee08c-4c6c-4012-aa46-264179923617\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792852 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-lock\") pod \"20196d3e-600c-4a25-97ef-86f81bfae43b\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792877 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-cache\") pod \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792895 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"410ee08c-4c6c-4012-aa46-264179923617\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792940 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792961 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") pod \"410ee08c-4c6c-4012-aa46-264179923617\" (UID: \"410ee08c-4c6c-4012-aa46-264179923617\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.792981 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-cache\") pod \"20196d3e-600c-4a25-97ef-86f81bfae43b\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.793028 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gtpt\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-kube-api-access-5gtpt\") pod \"20196d3e-600c-4a25-97ef-86f81bfae43b\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.793066 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqsrn\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-kube-api-access-kqsrn\") pod \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\" (UID: \"485e2c17-77f1-4b13-ad2a-1afe1034b82e\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.793102 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"20196d3e-600c-4a25-97ef-86f81bfae43b\" (UID: \"20196d3e-600c-4a25-97ef-86f81bfae43b\") " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.795778 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-cache" (OuterVolumeSpecName: "cache") pod "485e2c17-77f1-4b13-ad2a-1afe1034b82e" (UID: "485e2c17-77f1-4b13-ad2a-1afe1034b82e"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.796777 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-lock" (OuterVolumeSpecName: "lock") pod "485e2c17-77f1-4b13-ad2a-1afe1034b82e" (UID: "485e2c17-77f1-4b13-ad2a-1afe1034b82e"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.796884 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-cache" (OuterVolumeSpecName: "cache") pod "20196d3e-600c-4a25-97ef-86f81bfae43b" (UID: "20196d3e-600c-4a25-97ef-86f81bfae43b"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.799108 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-cache" (OuterVolumeSpecName: "cache") pod "410ee08c-4c6c-4012-aa46-264179923617" (UID: "410ee08c-4c6c-4012-aa46-264179923617"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.799531 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-lock" (OuterVolumeSpecName: "lock") pod "20196d3e-600c-4a25-97ef-86f81bfae43b" (UID: "20196d3e-600c-4a25-97ef-86f81bfae43b"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.799905 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-lock" (OuterVolumeSpecName: "lock") pod "410ee08c-4c6c-4012-aa46-264179923617" (UID: "410ee08c-4c6c-4012-aa46-264179923617"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.800456 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "410ee08c-4c6c-4012-aa46-264179923617" (UID: "410ee08c-4c6c-4012-aa46-264179923617"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.800555 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "20196d3e-600c-4a25-97ef-86f81bfae43b" (UID: "20196d3e-600c-4a25-97ef-86f81bfae43b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.800739 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "20196d3e-600c-4a25-97ef-86f81bfae43b" (UID: "20196d3e-600c-4a25-97ef-86f81bfae43b"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.802624 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "485e2c17-77f1-4b13-ad2a-1afe1034b82e" (UID: "485e2c17-77f1-4b13-ad2a-1afe1034b82e"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.803450 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "485e2c17-77f1-4b13-ad2a-1afe1034b82e" (UID: "485e2c17-77f1-4b13-ad2a-1afe1034b82e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.806107 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "410ee08c-4c6c-4012-aa46-264179923617" (UID: "410ee08c-4c6c-4012-aa46-264179923617"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.807149 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-kube-api-access-pt7ch" (OuterVolumeSpecName: "kube-api-access-pt7ch") pod "410ee08c-4c6c-4012-aa46-264179923617" (UID: "410ee08c-4c6c-4012-aa46-264179923617"). InnerVolumeSpecName "kube-api-access-pt7ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.808824 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-kube-api-access-kqsrn" (OuterVolumeSpecName: "kube-api-access-kqsrn") pod "485e2c17-77f1-4b13-ad2a-1afe1034b82e" (UID: "485e2c17-77f1-4b13-ad2a-1afe1034b82e"). InnerVolumeSpecName "kube-api-access-kqsrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.812866 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-kube-api-access-5gtpt" (OuterVolumeSpecName: "kube-api-access-5gtpt") pod "20196d3e-600c-4a25-97ef-86f81bfae43b" (UID: "20196d3e-600c-4a25-97ef-86f81bfae43b"). InnerVolumeSpecName "kube-api-access-5gtpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894320 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894363 4732 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-cache\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894373 4732 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894381 4732 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/410ee08c-4c6c-4012-aa46-264179923617-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894390 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894399 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt7ch\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-kube-api-access-pt7ch\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894411 4732 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894419 4732 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/485e2c17-77f1-4b13-ad2a-1afe1034b82e-cache\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894451 
4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894464 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894472 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/410ee08c-4c6c-4012-aa46-264179923617-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894481 4732 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/20196d3e-600c-4a25-97ef-86f81bfae43b-cache\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894489 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gtpt\" (UniqueName: \"kubernetes.io/projected/20196d3e-600c-4a25-97ef-86f81bfae43b-kube-api-access-5gtpt\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894498 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqsrn\" (UniqueName: \"kubernetes.io/projected/485e2c17-77f1-4b13-ad2a-1afe1034b82e-kube-api-access-kqsrn\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.894511 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.906973 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.908712 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.909710 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.996500 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.996540 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:28 crc kubenswrapper[4732]: I0131 09:18:28.996549 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.492838 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"20196d3e-600c-4a25-97ef-86f81bfae43b","Type":"ContainerDied","Data":"87f5d0b315407f4d9f600594a48f694f95c4330a184853c9d3591930eb2df4a1"} Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 
09:18:29.492901 4732 scope.go:117] "RemoveContainer" containerID="07ba04fcdc461974d0c6b09ac9da6b0c6610f36749f50f5eb47e063be68f5291" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.493085 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.507626 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.507895 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"485e2c17-77f1-4b13-ad2a-1afe1034b82e","Type":"ContainerDied","Data":"74e38e927b7f9f5222a587afffa38799ef87d240853e804c1d4e908457454209"} Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.518540 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"410ee08c-4c6c-4012-aa46-264179923617","Type":"ContainerDied","Data":"bd56bd5520be61343a090a3833bf69ab0722a78470f84b63a8f6a8b06d85cd3e"} Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.518722 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.519740 4732 scope.go:117] "RemoveContainer" containerID="3791aa9fdd8ee5b8255c509c99988f621034603e232b3c13e0287a092538c4df" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.537320 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.544615 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.547947 4732 scope.go:117] "RemoveContainer" containerID="bd9e9b782c4457e3bdc1029eec72a89f862eb48d009287030b3d15352e63d945" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.573081 4732 scope.go:117] "RemoveContainer" containerID="aba53546aab560ce992f9ed9d7c70a191eee00bcb246cc44c3bcf9dc6c6a964d" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.586366 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.593219 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.593985 4732 scope.go:117] "RemoveContainer" containerID="87b04e76193dc458b5e96db200183f64f7cda5171af7bf59a9b3b47ca7c00582" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.600008 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.606633 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.610735 4732 scope.go:117] "RemoveContainer" containerID="23a3dbdcb5e8c0d9b5e2b66e270ff60b28361fccb19ac39e68496d454544c773" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.624819 4732 scope.go:117] "RemoveContainer" containerID="95aedeb15c9b618726f462f7211b9b97027622f3a2b198dc9717394e81868db5" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.638750 4732 scope.go:117] "RemoveContainer" containerID="dcca573b69212cf173983d6090be0a12978f5decb7c01cbd3779f1d4d2a6b0d0" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 
09:18:29.659281 4732 scope.go:117] "RemoveContainer" containerID="de878ffff4963739e745def44cb56f453584bcd3f75e8cb3d9b85f0ca8d4e512" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.674277 4732 scope.go:117] "RemoveContainer" containerID="750de579c99e37a98c19df2a11fee436a72f515c72747a3e91a0abc97cd07385" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.687316 4732 scope.go:117] "RemoveContainer" containerID="3ca9c58e45467d768a6b9a7ce40fa6bc0ba4cec3710eae69487dc0106ba42c67" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.699768 4732 scope.go:117] "RemoveContainer" containerID="f6d112eeaf369a7cae9a04806380ef040d7989006ab740a5b0d945ef5f7f317b" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.720945 4732 scope.go:117] "RemoveContainer" containerID="5d118436256aaad4b93c230d8fa0456e8cc6f3c5ca6ce4c9d7f30f8db3c450ff" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.736337 4732 scope.go:117] "RemoveContainer" containerID="bcb9d265c1a8c03fc26ba3b2f05fa7fcceb6cc59b711a2e0dd620cf846db5469" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.752118 4732 scope.go:117] "RemoveContainer" containerID="f47b66d5b1e0a89ad9b955fbf6dde3f0b21e522e86e33f379a1e2379b7919989" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.767507 4732 scope.go:117] "RemoveContainer" containerID="29d6915d3be45a209958c78f32e6cbfa5546c65adc3eb566381972012beea587" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.783984 4732 scope.go:117] "RemoveContainer" containerID="2038dec3545765c898c51198c887beb686d603d6257cd634255dc773c64b0920" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.799400 4732 scope.go:117] "RemoveContainer" containerID="171dd93dab917d5e3ad3010e206ac5f2aa6f0312273716ac344a73726703fbda" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.812897 4732 scope.go:117] "RemoveContainer" containerID="1839639d5b9952c70ff36e7c3b972e786bbfb15314211dc0fe15a6700ac4d580" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.826205 4732 scope.go:117] "RemoveContainer" containerID="17809bd753b75791dd2fa65451f79147382e0f41d6052d606f342c4037233a9a" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.841728 4732 scope.go:117] "RemoveContainer" containerID="0939cdb203f97f24dcd9e0d0769010bdb9148423fa723be6da835b8c3fcd94a3" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.868084 4732 scope.go:117] "RemoveContainer" containerID="f59d6e8e79e5f65eaab0514cd4a4f2da1508f44198f23637971d0e31f2b0f2be" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.891853 4732 scope.go:117] "RemoveContainer" containerID="6c6f43f8261cf2448edf43a43564636ba04ed0b128e0df16f2584d3972a973a9" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.907053 4732 scope.go:117] "RemoveContainer" containerID="b26aaf7029934891de5d0e92f8463ca1f8e106c401d898590351656c24bbc6e8" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.919874 4732 scope.go:117] "RemoveContainer" containerID="ecfb0c165d8f537d03e0e64cf12e24a1575f070da3670ea4769b182ee59dcc20" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.938827 4732 scope.go:117] "RemoveContainer" containerID="0630d2ca7e09803aac756e9568f7669b86f328b502c8fbec24c1ab333009da4d" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.955439 4732 scope.go:117] "RemoveContainer" containerID="f72f8dedea2940a6aab6b6081a6ac32f4fe8a817a636c7292b3740e840aacaf8" Jan 31 09:18:29 crc kubenswrapper[4732]: I0131 09:18:29.974172 4732 scope.go:117] "RemoveContainer" containerID="069a866bbc2edc4e0416ac1351e656e79b93160d557be633a57aa05933c9d37a" Jan 31 09:18:29 crc 
kubenswrapper[4732]: I0131 09:18:29.990202 4732 scope.go:117] "RemoveContainer" containerID="0eb6a32653dab2879fd0e39ad42f712d86e6b8054d3507176f2d9dfda652811b" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.008298 4732 scope.go:117] "RemoveContainer" containerID="ef329846cba5f043851b0fac85462ce13c21ac2e48891f331c0b61745c33b6de" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.025153 4732 scope.go:117] "RemoveContainer" containerID="81f782a2d54156b4990bcbb2dab091aeb970934a10ac85e7abc67c813255ffee" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.041368 4732 scope.go:117] "RemoveContainer" containerID="db1b9074ef0795f714675af65f50c4b87204369e4a7ad86e330547b5ab8f05fb" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.099018 4732 scope.go:117] "RemoveContainer" containerID="7f3c24b1866708a685d278257ae506a6fd79ff0289067bb9f41d1ad991336cc6" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.156849 4732 scope.go:117] "RemoveContainer" containerID="5e6d65fa1c26ce7ebf7ff25c277b074ca08845044a94dbdd6b32ff300ac68b7e" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.189856 4732 scope.go:117] "RemoveContainer" containerID="b045d34930a290c72208028dc6d1d6b3535e4b8818d51998647df06b3f7c2bdc" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.207793 4732 scope.go:117] "RemoveContainer" containerID="288678dcae1157f1c7ebe15c590533bc3accdd196dbe20808122f33180c2092e" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.222323 4732 scope.go:117] "RemoveContainer" containerID="7907616b4b6ec30fa97533949de2bee3dfcee1ddac0b222cb83f6c75835a1755" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.237209 4732 scope.go:117] "RemoveContainer" containerID="20b5017b8e043011a79ad2661ea73a50a94d52bc115728a4ab154b985c7430df" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.254535 4732 scope.go:117] "RemoveContainer" containerID="73d8c40ae242fa0366cc532a47eefd15e47b7f5aeeb79eb1c8113e496beaf6e2" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.270147 4732 scope.go:117] "RemoveContainer" containerID="2ab9eec16ab6fbc15e432a8ef644dbb519f0a4dd256590fb6f9676ffceba4611" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.283625 4732 scope.go:117] "RemoveContainer" containerID="6a55e95bf79b05700a43bb7800d35ec7f862e81feb1caac0721521392f5f8e7f" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.300628 4732 scope.go:117] "RemoveContainer" containerID="7f58b9ade96eae3a4c9b405f187c769232309e0c1fb241bf5444c4c304ab649f" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.316891 4732 scope.go:117] "RemoveContainer" containerID="dbfd2c11334695f732524b533be449dd61ebf7d7637f32b3bcfe1d1941b37863" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.346271 4732 scope.go:117] "RemoveContainer" containerID="b63fa3de54dc547aaba5e3c717d85f707b81c7144e50e21b9e827db634951724" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.362798 4732 scope.go:117] "RemoveContainer" containerID="61cb3b1d16a1d1d363ee3738835e89a0f2f6d3986bf10918c00e484f7f8c9ef7" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.555605 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" path="/var/lib/kubelet/pods/20196d3e-600c-4a25-97ef-86f81bfae43b/volumes" Jan 31 09:18:30 crc kubenswrapper[4732]: I0131 09:18:30.558201 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410ee08c-4c6c-4012-aa46-264179923617" path="/var/lib/kubelet/pods/410ee08c-4c6c-4012-aa46-264179923617/volumes" Jan 31 09:18:30 crc 
kubenswrapper[4732]: I0131 09:18:30.560909 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" path="/var/lib/kubelet/pods/485e2c17-77f1-4b13-ad2a-1afe1034b82e/volumes" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456018 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456632 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456644 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456653 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-updater" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456659 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-updater" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456684 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456691 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456699 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456704 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456712 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456717 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456725 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="swift-recon-cron" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456731 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="swift-recon-cron" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456745 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="swift-recon-cron" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456751 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="swift-recon-cron" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456762 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456768 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" 
containerName="container-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456778 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456784 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456795 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-server" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456800 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-server" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456820 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456825 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456835 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-reaper" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456841 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-reaper" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456848 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-server" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456853 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-server" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456868 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="rsync" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456874 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="rsync" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456880 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-server" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456885 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-server" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456892 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-updater" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456897 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-updater" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456908 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-updater" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456914 4732 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-updater" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456925 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-server" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456931 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-server" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456939 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456945 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456953 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-expirer" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456958 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-expirer" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456968 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-expirer" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456973 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-expirer" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456983 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-server" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.456989 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-server" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.456997 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-updater" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457002 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-updater" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457011 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3323fc69-96ba-4767-aca9-a094ee4511fa" containerName="swift-ring-rebalance" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457017 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="3323fc69-96ba-4767-aca9-a094ee4511fa" containerName="swift-ring-rebalance" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457027 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457032 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457040 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-reaper" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457047 4732 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-reaper" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457055 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-server" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457060 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-server" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457070 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457075 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457084 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457090 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457100 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457106 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457113 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-expirer" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457118 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-expirer" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457124 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457130 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-auditor" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457137 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-httpd" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457143 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-httpd" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457152 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-server" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457158 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-server" Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457165 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-replicator" Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457171 
4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457179 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457185 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457191 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457197 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457205 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457211 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457218 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457224 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457232 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457238 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457247 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="rsync"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457253 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="rsync"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457263 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-reaper"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457269 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-reaper"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457275 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="swift-recon-cron"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457281 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="swift-recon-cron"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457290 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457295 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457305 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457310 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457320 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="rsync"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457325 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="rsync"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457334 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457340 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.457348 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457353 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457470 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457482 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-expirer"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457493 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457502 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457509 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457517 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-httpd"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457523 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457529 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-expirer"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457539 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457550 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457560 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457567 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457573 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457581 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e00e76d-2f89-454c-be3b-855e8186c78e" containerName="proxy-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457587 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457593 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-reaper"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457600 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457607 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457615 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457621 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="swift-recon-cron"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457629 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="rsync"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457636 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457645 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="object-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457653 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457680 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="object-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457689 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457696 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="swift-recon-cron"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457702 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="swift-recon-cron"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457709 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-reaper"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457714 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457721 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="object-expirer"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457729 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457737 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="rsync"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457743 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457752 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457758 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-reaper"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457767 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-updater"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457774 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457781 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="account-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457789 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457796 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="container-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457804 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457813 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="account-replicator"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457821 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="account-auditor"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457826 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="485e2c17-77f1-4b13-ad2a-1afe1034b82e" containerName="container-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457833 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="20196d3e-600c-4a25-97ef-86f81bfae43b" containerName="rsync"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457839 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="3323fc69-96ba-4767-aca9-a094ee4511fa" containerName="swift-ring-rebalance"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.457846 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ee08c-4c6c-4012-aa46-264179923617" containerName="container-server"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.461542 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.463355 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.463516 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-tc45w"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.463619 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.463785 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.482939 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.557961 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.558034 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-lock\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.558073 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5fb9\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-kube-api-access-b5fb9\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.558096 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.558116 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-cache\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.658977 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.659089 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-lock\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.659123 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5fb9\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-kube-api-access-b5fb9\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.659155 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.659169 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-cache\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.660648 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") device mount path \"/mnt/openstack/pv06\"" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.661967 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.661991 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:18:32 crc kubenswrapper[4732]: E0131 09:18:32.662040 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift podName:1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65 nodeName:}" failed. No retries permitted until 2026-01-31 09:18:33.162021602 +0000 UTC m=+1051.467897826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift") pod "swift-storage-0" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65") : configmap "swift-ring-files" not found
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.662237 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-lock\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.662326 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-cache\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.679932 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5fb9\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-kube-api-access-b5fb9\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:32 crc kubenswrapper[4732]: I0131 09:18:32.692081 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:33 crc kubenswrapper[4732]: I0131 09:18:33.167802 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:33 crc kubenswrapper[4732]: E0131 09:18:33.168033 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:18:33 crc kubenswrapper[4732]: E0131 09:18:33.168071 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:18:33 crc kubenswrapper[4732]: E0131 09:18:33.168129 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift podName:1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65 nodeName:}" failed. No retries permitted until 2026-01-31 09:18:34.16811082 +0000 UTC m=+1052.473987024 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift") pod "swift-storage-0" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65") : configmap "swift-ring-files" not found
Jan 31 09:18:34 crc kubenswrapper[4732]: I0131 09:18:34.181455 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:34 crc kubenswrapper[4732]: E0131 09:18:34.181576 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:18:34 crc kubenswrapper[4732]: E0131 09:18:34.181598 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:18:34 crc kubenswrapper[4732]: E0131 09:18:34.181643 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift podName:1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65 nodeName:}" failed. No retries permitted until 2026-01-31 09:18:36.181627808 +0000 UTC m=+1054.487504022 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift") pod "swift-storage-0" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65") : configmap "swift-ring-files" not found
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.210590 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:36 crc kubenswrapper[4732]: E0131 09:18:36.210895 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:18:36 crc kubenswrapper[4732]: E0131 09:18:36.211195 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:18:36 crc kubenswrapper[4732]: E0131 09:18:36.211266 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift podName:1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65 nodeName:}" failed. No retries permitted until 2026-01-31 09:18:40.211242953 +0000 UTC m=+1058.517119197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift") pod "swift-storage-0" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65") : configmap "swift-ring-files" not found
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.346561 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-nc962"]
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.347563 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.350914 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.351183 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.353537 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.357940 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-nc962"]
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.413657 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-etc-swift\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.413740 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-ring-data-devices\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.413779 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-swiftconf\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.413830 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9fnr\" (UniqueName: \"kubernetes.io/projected/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-kube-api-access-w9fnr\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.413876 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-dispersionconf\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.413923 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-scripts\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.516009 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-dispersionconf\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.516119 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-scripts\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.516202 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-etc-swift\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.516260 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-ring-data-devices\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.516328 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-swiftconf\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.516411 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9fnr\" (UniqueName: \"kubernetes.io/projected/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-kube-api-access-w9fnr\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.517443 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-etc-swift\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.518442 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-scripts\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.518708 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-ring-data-devices\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.526337 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-dispersionconf\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.528348 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-swiftconf\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.537743 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9fnr\" (UniqueName: \"kubernetes.io/projected/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-kube-api-access-w9fnr\") pod \"swift-ring-rebalance-nc962\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") " pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.684983 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:36 crc kubenswrapper[4732]: I0131 09:18:36.934023 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-nc962"]
Jan 31 09:18:37 crc kubenswrapper[4732]: I0131 09:18:37.585595 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-nc962" event={"ID":"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72","Type":"ContainerStarted","Data":"4202ea6b721143639a1a8ee464d8d295428596e358fa14506e850632ab62de19"}
Jan 31 09:18:37 crc kubenswrapper[4732]: I0131 09:18:37.586037 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-nc962" event={"ID":"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72","Type":"ContainerStarted","Data":"dd29dd54690710629e25eeaca76d8853c1180c1e25413daf08017e46075f113c"}
Jan 31 09:18:37 crc kubenswrapper[4732]: I0131 09:18:37.621411 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-nc962" podStartSLOduration=1.6213897720000001 podStartE2EDuration="1.621389772s" podCreationTimestamp="2026-01-31 09:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:18:37.610430711 +0000 UTC m=+1055.916306945" watchObservedRunningTime="2026-01-31 09:18:37.621389772 +0000 UTC m=+1055.927265986"
Jan 31 09:18:40 crc kubenswrapper[4732]: I0131 09:18:40.273266 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:40 crc kubenswrapper[4732]: E0131 09:18:40.273550 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:18:40 crc kubenswrapper[4732]: E0131 09:18:40.273621 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:18:40 crc kubenswrapper[4732]: E0131 09:18:40.273817 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift podName:1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65 nodeName:}" failed. No retries permitted until 2026-01-31 09:18:48.273788917 +0000 UTC m=+1066.579665121 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift") pod "swift-storage-0" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65") : configmap "swift-ring-files" not found
Jan 31 09:18:44 crc kubenswrapper[4732]: I0131 09:18:44.636596 4732 generic.go:334] "Generic (PLEG): container finished" podID="69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" containerID="4202ea6b721143639a1a8ee464d8d295428596e358fa14506e850632ab62de19" exitCode=0
Jan 31 09:18:44 crc kubenswrapper[4732]: I0131 09:18:44.636698 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-nc962" event={"ID":"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72","Type":"ContainerDied","Data":"4202ea6b721143639a1a8ee464d8d295428596e358fa14506e850632ab62de19"}
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.917061 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.979220 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9fnr\" (UniqueName: \"kubernetes.io/projected/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-kube-api-access-w9fnr\") pod \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") "
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.979337 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-scripts\") pod \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") "
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.979390 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-ring-data-devices\") pod \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") "
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.979454 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-dispersionconf\") pod \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") "
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.979479 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-swiftconf\") pod \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") "
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.979560 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-etc-swift\") pod \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\" (UID: \"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72\") "
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.980261 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" (UID: "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.980957 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" (UID: "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.987869 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-kube-api-access-w9fnr" (OuterVolumeSpecName: "kube-api-access-w9fnr") pod "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" (UID: "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72"). InnerVolumeSpecName "kube-api-access-w9fnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.991816 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" (UID: "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:18:45 crc kubenswrapper[4732]: I0131 09:18:45.999596 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-scripts" (OuterVolumeSpecName: "scripts") pod "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" (UID: "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.000751 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" (UID: "69e75a6a-a8c3-4ea1-b9a9-418752e2fc72"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.081002 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.081036 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.081046 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9fnr\" (UniqueName: \"kubernetes.io/projected/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-kube-api-access-w9fnr\") on node \"crc\" DevicePath \"\""
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.081055 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.081064 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.081072 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.653857 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-nc962" event={"ID":"69e75a6a-a8c3-4ea1-b9a9-418752e2fc72","Type":"ContainerDied","Data":"dd29dd54690710629e25eeaca76d8853c1180c1e25413daf08017e46075f113c"}
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.653906 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd29dd54690710629e25eeaca76d8853c1180c1e25413daf08017e46075f113c"
Jan 31 09:18:46 crc kubenswrapper[4732]: I0131 09:18:46.653972 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-nc962"
Jan 31 09:18:48 crc kubenswrapper[4732]: I0131 09:18:48.316938 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:48 crc kubenswrapper[4732]: I0131 09:18:48.326537 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") pod \"swift-storage-0\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:48 crc kubenswrapper[4732]: I0131 09:18:48.380495 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:18:48 crc kubenswrapper[4732]: I0131 09:18:48.886803 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 09:18:49 crc kubenswrapper[4732]: I0131 09:18:49.684938 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a"}
Jan 31 09:18:49 crc kubenswrapper[4732]: I0131 09:18:49.685186 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327"}
Jan 31 09:18:49 crc kubenswrapper[4732]: I0131 09:18:49.685199 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521"}
Jan 31 09:18:49 crc kubenswrapper[4732]: I0131 09:18:49.685209 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f"}
Jan 31 09:18:49 crc kubenswrapper[4732]: I0131 09:18:49.685227 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21"}
Jan 31 09:18:49 crc kubenswrapper[4732]: I0131 09:18:49.685235 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"34458a4f4a5d5e0c1ee442a308b559938e3621f6dae5595249f026b6e797962d"}
Jan 31 09:18:50 crc kubenswrapper[4732]: I0131 09:18:50.703675 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844"}
Jan 31 09:18:50 crc kubenswrapper[4732]: I0131 09:18:50.704029 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad"}
Jan 31 09:18:50 crc kubenswrapper[4732]: I0131 09:18:50.704914 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849"}
Jan 31 09:18:50 crc kubenswrapper[4732]: I0131 09:18:50.704945 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7"}
Jan 31 09:18:50 crc kubenswrapper[4732]: I0131 09:18:50.704959 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050"}
Jan 31 09:18:50 crc kubenswrapper[4732]: I0131 09:18:50.705217 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef"}
Jan 31 09:18:51 crc kubenswrapper[4732]: I0131 09:18:51.716898 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337"}
Jan 31 09:18:51 crc kubenswrapper[4732]: I0131 09:18:51.717214 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479"}
Jan 31 09:18:51 crc kubenswrapper[4732]: I0131 09:18:51.717229 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff"}
Jan 31 09:18:51 crc kubenswrapper[4732]: I0131 09:18:51.717239 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8"}
Jan 31 09:18:51 crc kubenswrapper[4732]: I0131 09:18:51.717248 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerStarted","Data":"4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91"}
Jan 31 09:18:51 crc kubenswrapper[4732]: I0131 09:18:51.755815 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=20.755796021 podStartE2EDuration="20.755796021s" podCreationTimestamp="2026-01-31 09:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:18:51.750915939 +0000 UTC m=+1070.056792143" watchObservedRunningTime="2026-01-31 09:18:51.755796021 +0000 UTC m=+1070.061672215"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.827057 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"]
Jan 31 09:18:57 crc kubenswrapper[4732]: E0131 09:18:57.827781 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" containerName="swift-ring-rebalance"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.827795 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" containerName="swift-ring-rebalance"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.827918 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" containerName="swift-ring-rebalance"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.828558 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.831099 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.856925 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"]
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.971112 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4pgj\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-kube-api-access-c4pgj\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.971211 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-log-httpd\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.971398 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-run-httpd\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.971463 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-config-data\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:57 crc kubenswrapper[4732]: I0131 09:18:57.971519 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-etc-swift\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.072828 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-config-data\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.072917 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-etc-swift\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.072978 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4pgj\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-kube-api-access-c4pgj\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.073008 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-log-httpd\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.073089 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-run-httpd\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.073742 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-run-httpd\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.073832 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-log-httpd\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.078615 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-config-data\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.078682 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-etc-swift\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.099069 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4pgj\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-kube-api-access-c4pgj\") pod \"swift-proxy-59dd5dbb7c-lcxsq\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.155277 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.567724 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"]
Jan 31 09:18:58 crc kubenswrapper[4732]: I0131 09:18:58.790726 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" event={"ID":"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9","Type":"ContainerStarted","Data":"84936fd9e60dfd8bff06c859b11fa4812f19d8ba83da60cc1b9233b2901d5e51"}
Jan 31 09:18:59 crc kubenswrapper[4732]: I0131 09:18:59.802547 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" event={"ID":"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9","Type":"ContainerStarted","Data":"14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628"}
Jan 31 09:18:59 crc kubenswrapper[4732]: I0131 09:18:59.802904 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:18:59 crc kubenswrapper[4732]: I0131 09:18:59.802917 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" event={"ID":"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9","Type":"ContainerStarted","Data":"7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c"}
Jan 31 09:18:59 crc kubenswrapper[4732]: I0131 09:18:59.842178 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" podStartSLOduration=2.842152673 podStartE2EDuration="2.842152673s" podCreationTimestamp="2026-01-31 09:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:18:59.833150183 +0000 UTC m=+1078.139026387" watchObservedRunningTime="2026-01-31 09:18:59.842152673 +0000 UTC m=+1078.148028877"
Jan 31 09:19:00 crc kubenswrapper[4732]: I0131 09:19:00.809119 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:19:03 crc kubenswrapper[4732]: I0131 09:19:03.161890 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:19:08 crc kubenswrapper[4732]: I0131 09:19:08.158413 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.808930 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"]
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.830657 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.833773 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.833828 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.838519 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"]
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.948541 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmz5c\" (UniqueName: \"kubernetes.io/projected/71a2159f-3ae8-40c1-8404-a58106192d87-kube-api-access-mmz5c\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.948617 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-ring-data-devices\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.948648 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71a2159f-3ae8-40c1-8404-a58106192d87-etc-swift\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.948686 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-scripts\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.948705 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-dispersionconf\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:09 crc kubenswrapper[4732]: I0131 09:19:09.948732 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-swiftconf\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.050122 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmz5c\" (UniqueName: \"kubernetes.io/projected/71a2159f-3ae8-40c1-8404-a58106192d87-kube-api-access-mmz5c\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.050217 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-ring-data-devices\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.050285 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71a2159f-3ae8-40c1-8404-a58106192d87-etc-swift\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.050318 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-scripts\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.050341 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-dispersionconf\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.050381 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-swiftconf\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.050812 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71a2159f-3ae8-40c1-8404-a58106192d87-etc-swift\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.051120 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-ring-data-devices\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.051241 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-scripts\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.056478 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-dispersionconf\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.059232 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-swiftconf\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.072636 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmz5c\" (UniqueName: \"kubernetes.io/projected/71a2159f-3ae8-40c1-8404-a58106192d87-kube-api-access-mmz5c\") pod \"swift-ring-rebalance-debug-p42tv\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.198517 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.606801 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"]
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.885648 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv" event={"ID":"71a2159f-3ae8-40c1-8404-a58106192d87","Type":"ContainerStarted","Data":"1fdd9ffe15b68f781c0060a8dc3d52561429e6211de1a33c677c7ae5bc16192d"}
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.885710 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv" event={"ID":"71a2159f-3ae8-40c1-8404-a58106192d87","Type":"ContainerStarted","Data":"1a981eb19d5e56073ba342ee44e1dc63200aae3ea6637c97ff5969c7277df29e"}
Jan 31 09:19:10 crc kubenswrapper[4732]: I0131 09:19:10.905602 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv" podStartSLOduration=1.905584253 podStartE2EDuration="1.905584253s" podCreationTimestamp="2026-01-31 09:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:19:10.897843552 +0000 UTC m=+1089.203719776" watchObservedRunningTime="2026-01-31 09:19:10.905584253 +0000 UTC m=+1089.211460477"
Jan 31 09:19:12 crc kubenswrapper[4732]: I0131 09:19:12.904897 4732 generic.go:334] "Generic (PLEG): container finished" podID="71a2159f-3ae8-40c1-8404-a58106192d87" containerID="1fdd9ffe15b68f781c0060a8dc3d52561429e6211de1a33c677c7ae5bc16192d" exitCode=0
Jan 31 09:19:12 crc kubenswrapper[4732]: I0131 09:19:12.905290 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv" event={"ID":"71a2159f-3ae8-40c1-8404-a58106192d87","Type":"ContainerDied","Data":"1fdd9ffe15b68f781c0060a8dc3d52561429e6211de1a33c677c7ae5bc16192d"}
Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.208131 4732 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.260878 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"] Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.269164 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-p42tv"] Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.311603 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71a2159f-3ae8-40c1-8404-a58106192d87-etc-swift\") pod \"71a2159f-3ae8-40c1-8404-a58106192d87\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.311678 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmz5c\" (UniqueName: \"kubernetes.io/projected/71a2159f-3ae8-40c1-8404-a58106192d87-kube-api-access-mmz5c\") pod \"71a2159f-3ae8-40c1-8404-a58106192d87\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.311702 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-dispersionconf\") pod \"71a2159f-3ae8-40c1-8404-a58106192d87\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.311730 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-scripts\") pod \"71a2159f-3ae8-40c1-8404-a58106192d87\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.311763 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-ring-data-devices\") pod \"71a2159f-3ae8-40c1-8404-a58106192d87\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.311878 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-swiftconf\") pod \"71a2159f-3ae8-40c1-8404-a58106192d87\" (UID: \"71a2159f-3ae8-40c1-8404-a58106192d87\") " Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.315155 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "71a2159f-3ae8-40c1-8404-a58106192d87" (UID: "71a2159f-3ae8-40c1-8404-a58106192d87"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.315966 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71a2159f-3ae8-40c1-8404-a58106192d87-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "71a2159f-3ae8-40c1-8404-a58106192d87" (UID: "71a2159f-3ae8-40c1-8404-a58106192d87"). InnerVolumeSpecName "etc-swift". 
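Every MountVolume.SetUp earlier in this stretch has a matching UnmountVolume.TearDown once the pod is deleted, keyed by the same pod UID and volume name, so a leaked mount would show up as an unmatched SetUp. A pairing sketch, assuming the entries have already been reduced to (event, uid, volume) tuples; the sample data is transcribed from the p42tv entries above:

```python
from collections import defaultdict

# (event, pod UID, volume) tuples transcribed from the p42tv entries above.
events = [
    ("setup",    "71a2159f-3ae8-40c1-8404-a58106192d87", "etc-swift"),
    ("setup",    "71a2159f-3ae8-40c1-8404-a58106192d87", "ring-data-devices"),
    ("teardown", "71a2159f-3ae8-40c1-8404-a58106192d87", "ring-data-devices"),
    ("teardown", "71a2159f-3ae8-40c1-8404-a58106192d87", "etc-swift"),
]

balance = defaultdict(int)
for event, uid, volume in events:
    balance[(uid, volume)] += 1 if event == "setup" else -1

leaked = [key for key, n in balance.items() if n > 0]
print(leaked or "all mounts torn down")  # -> all mounts torn down
```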
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.320585 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a2159f-3ae8-40c1-8404-a58106192d87-kube-api-access-mmz5c" (OuterVolumeSpecName: "kube-api-access-mmz5c") pod "71a2159f-3ae8-40c1-8404-a58106192d87" (UID: "71a2159f-3ae8-40c1-8404-a58106192d87"). InnerVolumeSpecName "kube-api-access-mmz5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.334347 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-scripts" (OuterVolumeSpecName: "scripts") pod "71a2159f-3ae8-40c1-8404-a58106192d87" (UID: "71a2159f-3ae8-40c1-8404-a58106192d87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.335343 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "71a2159f-3ae8-40c1-8404-a58106192d87" (UID: "71a2159f-3ae8-40c1-8404-a58106192d87"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.335429 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "71a2159f-3ae8-40c1-8404-a58106192d87" (UID: "71a2159f-3ae8-40c1-8404-a58106192d87"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.414216 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.414263 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/71a2159f-3ae8-40c1-8404-a58106192d87-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.414282 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmz5c\" (UniqueName: \"kubernetes.io/projected/71a2159f-3ae8-40c1-8404-a58106192d87-kube-api-access-mmz5c\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.414303 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/71a2159f-3ae8-40c1-8404-a58106192d87-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.414320 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.414335 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/71a2159f-3ae8-40c1-8404-a58106192d87-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.432849 4732 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq"] Jan 31 09:19:14 crc kubenswrapper[4732]: E0131 09:19:14.433099 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a2159f-3ae8-40c1-8404-a58106192d87" containerName="swift-ring-rebalance" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.433110 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a2159f-3ae8-40c1-8404-a58106192d87" containerName="swift-ring-rebalance" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.433254 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a2159f-3ae8-40c1-8404-a58106192d87" containerName="swift-ring-rebalance" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.433779 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.444607 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq"] Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.515153 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-swiftconf\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.515214 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-dispersionconf\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.515287 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfx8r\" (UniqueName: \"kubernetes.io/projected/7d0d843f-2091-43b9-a56c-a6e894f34c6a-kube-api-access-rfx8r\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.515317 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-scripts\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.515339 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7d0d843f-2091-43b9-a56c-a6e894f34c6a-etc-swift\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.515364 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-ring-data-devices\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.560401 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a2159f-3ae8-40c1-8404-a58106192d87" path="/var/lib/kubelet/pods/71a2159f-3ae8-40c1-8404-a58106192d87/volumes" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.616740 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfx8r\" (UniqueName: \"kubernetes.io/projected/7d0d843f-2091-43b9-a56c-a6e894f34c6a-kube-api-access-rfx8r\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.616800 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-scripts\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.616839 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7d0d843f-2091-43b9-a56c-a6e894f34c6a-etc-swift\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.616871 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-ring-data-devices\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.616999 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-swiftconf\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.617047 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-dispersionconf\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.617633 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-scripts\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.618473 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7d0d843f-2091-43b9-a56c-a6e894f34c6a-etc-swift\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.618476 4732 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-ring-data-devices\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.626424 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-dispersionconf\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.633115 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-swiftconf\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.634367 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfx8r\" (UniqueName: \"kubernetes.io/projected/7d0d843f-2091-43b9-a56c-a6e894f34c6a-kube-api-access-rfx8r\") pod \"swift-ring-rebalance-debug-xbvzq\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.763639 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.936486 4732 scope.go:117] "RemoveContainer" containerID="1fdd9ffe15b68f781c0060a8dc3d52561429e6211de1a33c677c7ae5bc16192d" Jan 31 09:19:14 crc kubenswrapper[4732]: I0131 09:19:14.936796 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-p42tv" Jan 31 09:19:15 crc kubenswrapper[4732]: I0131 09:19:15.062235 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq"] Jan 31 09:19:15 crc kubenswrapper[4732]: W0131 09:19:15.073582 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d0d843f_2091_43b9_a56c_a6e894f34c6a.slice/crio-0874ee788cc7a0201916119853527e285b142fd7b705ec112d80d8db9d6f6a97 WatchSource:0}: Error finding container 0874ee788cc7a0201916119853527e285b142fd7b705ec112d80d8db9d6f6a97: Status 404 returned error can't find the container with id 0874ee788cc7a0201916119853527e285b142fd7b705ec112d80d8db9d6f6a97 Jan 31 09:19:15 crc kubenswrapper[4732]: I0131 09:19:15.948651 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" event={"ID":"7d0d843f-2091-43b9-a56c-a6e894f34c6a","Type":"ContainerStarted","Data":"8f7b4935507d9f1d9a228ddbccd77963bc16505fe67a6b62a43d38a59973a92f"} Jan 31 09:19:15 crc kubenswrapper[4732]: I0131 09:19:15.949139 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" event={"ID":"7d0d843f-2091-43b9-a56c-a6e894f34c6a","Type":"ContainerStarted","Data":"0874ee788cc7a0201916119853527e285b142fd7b705ec112d80d8db9d6f6a97"} Jan 31 09:19:15 crc kubenswrapper[4732]: I0131 09:19:15.987088 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" podStartSLOduration=1.987062153 podStartE2EDuration="1.987062153s" podCreationTimestamp="2026-01-31 09:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:19:15.975009188 +0000 UTC m=+1094.280885432" watchObservedRunningTime="2026-01-31 09:19:15.987062153 +0000 UTC m=+1094.292938397" Jan 31 09:19:16 crc kubenswrapper[4732]: I0131 09:19:16.961886 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d0d843f-2091-43b9-a56c-a6e894f34c6a" containerID="8f7b4935507d9f1d9a228ddbccd77963bc16505fe67a6b62a43d38a59973a92f" exitCode=0 Jan 31 09:19:16 crc kubenswrapper[4732]: I0131 09:19:16.962027 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" event={"ID":"7d0d843f-2091-43b9-a56c-a6e894f34c6a","Type":"ContainerDied","Data":"8f7b4935507d9f1d9a228ddbccd77963bc16505fe67a6b62a43d38a59973a92f"} Jan 31 09:19:17 crc kubenswrapper[4732]: I0131 09:19:17.498165 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:19:17 crc kubenswrapper[4732]: I0131 09:19:17.498228 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.294979 4732 util.go:48] "No ready sandbox for pod can be found. 
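Mixed into the swift traffic above is an unrelated liveness-probe failure for machine-config-daemon-jnbt8 (connection refused on 127.0.0.1:8798). The prober.go failures all follow one shape, with probeType, pod, containerName, and output as quoted fields, which makes them easy to filter out of a log of this size. A filtering sketch (the regex is mine):

```python
import re

PROBE_RE = re.compile(
    r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)"'
    r'.*?containerName="(?P<container>[^"]+)"'
)

# Transcribed from the prober.go:107 entry above, trimmed to the matched fields.
line = ('I0131 09:19:17.498228 4732 prober.go:107] "Probe failed" '
        'probeType="Liveness" pod="openshift-machine-config-operator/'
        'machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" '
        'containerName="machine-config-daemon" probeResult="failure"')

m = PROBE_RE.search(line)
if m:
    print(m.group("type"), m.group("pod"), m.group("container"))
# -> Liveness openshift-machine-config-operator/machine-config-daemon-jnbt8
#    machine-config-daemon
```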
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.342933 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq"] Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.356010 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq"] Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.370394 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-ring-data-devices\") pod \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.370452 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-dispersionconf\") pod \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.370577 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-swiftconf\") pod \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.370623 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7d0d843f-2091-43b9-a56c-a6e894f34c6a-etc-swift\") pod \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.370689 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-scripts\") pod \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.370791 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfx8r\" (UniqueName: \"kubernetes.io/projected/7d0d843f-2091-43b9-a56c-a6e894f34c6a-kube-api-access-rfx8r\") pod \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\" (UID: \"7d0d843f-2091-43b9-a56c-a6e894f34c6a\") " Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.371189 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7d0d843f-2091-43b9-a56c-a6e894f34c6a" (UID: "7d0d843f-2091-43b9-a56c-a6e894f34c6a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.371970 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d0d843f-2091-43b9-a56c-a6e894f34c6a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7d0d843f-2091-43b9-a56c-a6e894f34c6a" (UID: "7d0d843f-2091-43b9-a56c-a6e894f34c6a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.375646 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0d843f-2091-43b9-a56c-a6e894f34c6a-kube-api-access-rfx8r" (OuterVolumeSpecName: "kube-api-access-rfx8r") pod "7d0d843f-2091-43b9-a56c-a6e894f34c6a" (UID: "7d0d843f-2091-43b9-a56c-a6e894f34c6a"). InnerVolumeSpecName "kube-api-access-rfx8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.389022 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7d0d843f-2091-43b9-a56c-a6e894f34c6a" (UID: "7d0d843f-2091-43b9-a56c-a6e894f34c6a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.401022 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-scripts" (OuterVolumeSpecName: "scripts") pod "7d0d843f-2091-43b9-a56c-a6e894f34c6a" (UID: "7d0d843f-2091-43b9-a56c-a6e894f34c6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.405849 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7d0d843f-2091-43b9-a56c-a6e894f34c6a" (UID: "7d0d843f-2091-43b9-a56c-a6e894f34c6a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.473171 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfx8r\" (UniqueName: \"kubernetes.io/projected/7d0d843f-2091-43b9-a56c-a6e894f34c6a-kube-api-access-rfx8r\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.473327 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.473357 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.473382 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7d0d843f-2091-43b9-a56c-a6e894f34c6a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.473405 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7d0d843f-2091-43b9-a56c-a6e894f34c6a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.473423 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d0d843f-2091-43b9-a56c-a6e894f34c6a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.555829 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7d0d843f-2091-43b9-a56c-a6e894f34c6a" path="/var/lib/kubelet/pods/7d0d843f-2091-43b9-a56c-a6e894f34c6a/volumes" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.982925 4732 scope.go:117] "RemoveContainer" containerID="8f7b4935507d9f1d9a228ddbccd77963bc16505fe67a6b62a43d38a59973a92f" Jan 31 09:19:18 crc kubenswrapper[4732]: I0131 09:19:18.982988 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xbvzq" Jan 31 09:19:20 crc kubenswrapper[4732]: I0131 09:19:20.955400 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gblbk"] Jan 31 09:19:20 crc kubenswrapper[4732]: E0131 09:19:20.955797 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0d843f-2091-43b9-a56c-a6e894f34c6a" containerName="swift-ring-rebalance" Jan 31 09:19:20 crc kubenswrapper[4732]: I0131 09:19:20.955816 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0d843f-2091-43b9-a56c-a6e894f34c6a" containerName="swift-ring-rebalance" Jan 31 09:19:20 crc kubenswrapper[4732]: I0131 09:19:20.955974 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0d843f-2091-43b9-a56c-a6e894f34c6a" containerName="swift-ring-rebalance" Jan 31 09:19:20 crc kubenswrapper[4732]: I0131 09:19:20.956544 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:20 crc kubenswrapper[4732]: I0131 09:19:20.959548 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 09:19:20 crc kubenswrapper[4732]: I0131 09:19:20.959723 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 09:19:20 crc kubenswrapper[4732]: I0131 09:19:20.965906 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gblbk"] Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.111729 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-swiftconf\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.111821 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-scripts\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.111853 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s7fl\" (UniqueName: \"kubernetes.io/projected/38fc93ea-e490-4d30-a742-97b668a286c5-kube-api-access-2s7fl\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.111893 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38fc93ea-e490-4d30-a742-97b668a286c5-etc-swift\") pod 
\"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.111954 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-ring-data-devices\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.112405 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-dispersionconf\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.213527 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-ring-data-devices\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.213621 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-dispersionconf\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.213700 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-swiftconf\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.213737 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-scripts\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.213761 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s7fl\" (UniqueName: \"kubernetes.io/projected/38fc93ea-e490-4d30-a742-97b668a286c5-kube-api-access-2s7fl\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.213796 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38fc93ea-e490-4d30-a742-97b668a286c5-etc-swift\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.214588 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/empty-dir/38fc93ea-e490-4d30-a742-97b668a286c5-etc-swift\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.214859 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-scripts\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.215295 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-ring-data-devices\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.221461 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-dispersionconf\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.224121 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-swiftconf\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.232098 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s7fl\" (UniqueName: \"kubernetes.io/projected/38fc93ea-e490-4d30-a742-97b668a286c5-kube-api-access-2s7fl\") pod \"swift-ring-rebalance-debug-gblbk\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.287847 4732 util.go:30] "No sandbox for pod can be found. 
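The short-lived debug pods in this stretch all trace the same SyncLoop arc: ADD, UPDATE, ContainerStarted, ContainerDied, DELETE, REMOVE. A sketch that orders those markers per pod, assuming the (timestamp, marker, pod) triples have already been extracted; the sample data is transcribed from the xbvzq entries above:

```python
from collections import defaultdict

# (time, marker, pod) triples transcribed from the xbvzq entries above.
events = [
    ("09:19:14.432849", "SyncLoop ADD",     "swift-ring-rebalance-debug-xbvzq"),
    ("09:19:14.444607", "SyncLoop UPDATE",  "swift-ring-rebalance-debug-xbvzq"),
    ("09:19:15.948651", "ContainerStarted", "swift-ring-rebalance-debug-xbvzq"),
    ("09:19:16.962027", "ContainerDied",    "swift-ring-rebalance-debug-xbvzq"),
    ("09:19:18.342933", "SyncLoop DELETE",  "swift-ring-rebalance-debug-xbvzq"),
    ("09:19:18.356010", "SyncLoop REMOVE",  "swift-ring-rebalance-debug-xbvzq"),
]

timeline = defaultdict(list)
for ts, marker, pod in events:
    timeline[pod].append((ts, marker))

for pod, steps in timeline.items():
    print(pod)
    for ts, marker in sorted(steps):  # timestamps sort lexicographically here
        print(f"  {ts}  {marker}")
```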
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:21 crc kubenswrapper[4732]: I0131 09:19:21.730270 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gblbk"] Jan 31 09:19:22 crc kubenswrapper[4732]: I0131 09:19:22.024105 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" event={"ID":"38fc93ea-e490-4d30-a742-97b668a286c5","Type":"ContainerStarted","Data":"432b5c5eba1b55f33733f64fa26fd00fcd2ddf085c54e19e85cffa05ca5842c9"} Jan 31 09:19:22 crc kubenswrapper[4732]: I0131 09:19:22.024156 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" event={"ID":"38fc93ea-e490-4d30-a742-97b668a286c5","Type":"ContainerStarted","Data":"18daffe26932d061f3f7f163a1c31a80af9593e793c13d3e5a4189a5cdd737d4"} Jan 31 09:19:22 crc kubenswrapper[4732]: I0131 09:19:22.041009 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" podStartSLOduration=2.040988762 podStartE2EDuration="2.040988762s" podCreationTimestamp="2026-01-31 09:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:19:22.038489334 +0000 UTC m=+1100.344365538" watchObservedRunningTime="2026-01-31 09:19:22.040988762 +0000 UTC m=+1100.346864966" Jan 31 09:19:23 crc kubenswrapper[4732]: I0131 09:19:23.031709 4732 generic.go:334] "Generic (PLEG): container finished" podID="38fc93ea-e490-4d30-a742-97b668a286c5" containerID="432b5c5eba1b55f33733f64fa26fd00fcd2ddf085c54e19e85cffa05ca5842c9" exitCode=0 Jan 31 09:19:23 crc kubenswrapper[4732]: I0131 09:19:23.031790 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" event={"ID":"38fc93ea-e490-4d30-a742-97b668a286c5","Type":"ContainerDied","Data":"432b5c5eba1b55f33733f64fa26fd00fcd2ddf085c54e19e85cffa05ca5842c9"} Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.367837 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.407495 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gblbk"] Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.416557 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gblbk"] Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.462865 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-ring-data-devices\") pod \"38fc93ea-e490-4d30-a742-97b668a286c5\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.462940 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-scripts\") pod \"38fc93ea-e490-4d30-a742-97b668a286c5\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.463089 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-dispersionconf\") pod \"38fc93ea-e490-4d30-a742-97b668a286c5\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.463130 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38fc93ea-e490-4d30-a742-97b668a286c5-etc-swift\") pod \"38fc93ea-e490-4d30-a742-97b668a286c5\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.463168 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-swiftconf\") pod \"38fc93ea-e490-4d30-a742-97b668a286c5\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.463270 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s7fl\" (UniqueName: \"kubernetes.io/projected/38fc93ea-e490-4d30-a742-97b668a286c5-kube-api-access-2s7fl\") pod \"38fc93ea-e490-4d30-a742-97b668a286c5\" (UID: \"38fc93ea-e490-4d30-a742-97b668a286c5\") " Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.463318 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "38fc93ea-e490-4d30-a742-97b668a286c5" (UID: "38fc93ea-e490-4d30-a742-97b668a286c5"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.463622 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.463968 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38fc93ea-e490-4d30-a742-97b668a286c5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "38fc93ea-e490-4d30-a742-97b668a286c5" (UID: "38fc93ea-e490-4d30-a742-97b668a286c5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.481949 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fc93ea-e490-4d30-a742-97b668a286c5-kube-api-access-2s7fl" (OuterVolumeSpecName: "kube-api-access-2s7fl") pod "38fc93ea-e490-4d30-a742-97b668a286c5" (UID: "38fc93ea-e490-4d30-a742-97b668a286c5"). InnerVolumeSpecName "kube-api-access-2s7fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.513514 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "38fc93ea-e490-4d30-a742-97b668a286c5" (UID: "38fc93ea-e490-4d30-a742-97b668a286c5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.521268 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-nc962"] Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.521424 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "38fc93ea-e490-4d30-a742-97b668a286c5" (UID: "38fc93ea-e490-4d30-a742-97b668a286c5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.527543 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-scripts" (OuterVolumeSpecName: "scripts") pod "38fc93ea-e490-4d30-a742-97b668a286c5" (UID: "38fc93ea-e490-4d30-a742-97b668a286c5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.532396 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-nc962"] Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.568715 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fc93ea-e490-4d30-a742-97b668a286c5-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.568751 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.568763 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/38fc93ea-e490-4d30-a742-97b668a286c5-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.568772 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/38fc93ea-e490-4d30-a742-97b668a286c5-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.568785 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s7fl\" (UniqueName: \"kubernetes.io/projected/38fc93ea-e490-4d30-a742-97b668a286c5-kube-api-access-2s7fl\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.592529 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fc93ea-e490-4d30-a742-97b668a286c5" path="/var/lib/kubelet/pods/38fc93ea-e490-4d30-a742-97b668a286c5/volumes" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.593151 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e75a6a-a8c3-4ea1-b9a9-418752e2fc72" path="/var/lib/kubelet/pods/69e75a6a-a8c3-4ea1-b9a9-418752e2fc72/volumes" Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.593886 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.593924 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"] Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.594118 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerName="proxy-httpd" containerID="cri-o://7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.594520 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerName="proxy-server" containerID="cri-o://14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.595380 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-server" containerID="cri-o://76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.595581 4732 
kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-sharder" containerID="cri-o://38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.595700 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="swift-recon-cron" containerID="cri-o://1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.595779 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="rsync" containerID="cri-o://5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.595849 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-expirer" containerID="cri-o://4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.595927 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-updater" containerID="cri-o://4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.596028 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-auditor" containerID="cri-o://dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.596105 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-replicator" containerID="cri-o://adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.596224 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-server" containerID="cri-o://a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.596240 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-replicator" containerID="cri-o://5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.596347 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-reaper" containerID="cri-o://3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: 
I0131 09:19:24.596422 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-server" containerID="cri-o://9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.596439 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-auditor" containerID="cri-o://14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.596550 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-replicator" containerID="cri-o://59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.596556 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-updater" containerID="cri-o://c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7" gracePeriod=30 Jan 31 09:19:24 crc kubenswrapper[4732]: I0131 09:19:24.596623 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-auditor" containerID="cri-o://31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050" gracePeriod=30 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.053125 4732 util.go:48] "No ready sandbox for pod can be found. 
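
The burst of kuberuntime_container.go:808 entries above is the kubelet tearing down both deleted swift pods: for every container it issues one CRI stop request carrying the pod's termination grace period (gracePeriod=30), so the runtime delivers SIGTERM first and escalates to SIGKILL only if the grace period runs out. A minimal sketch of that call pattern, assuming a hypothetical ContainerRuntime interface and fakeRuntime rather than the real CRI client:

    // stop_with_grace.go — illustrative sketch only, not kubelet source.
    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // ContainerRuntime is the one CRI-shaped call this sketch needs
    // (hypothetical; the real CRI RuntimeService is much richer).
    type ContainerRuntime interface {
        // StopContainer asks the runtime to SIGTERM the container and
        // SIGKILL it after timeoutSec seconds, like gracePeriod=30 above.
        StopContainer(ctx context.Context, containerID string, timeoutSec int64) error
    }

    func killContainerWithGrace(ctx context.Context, rt ContainerRuntime, pod, name, id string, grace int64) error {
        fmt.Printf("Killing container with a grace period pod=%q containerName=%q containerID=%q gracePeriod=%d\n",
            pod, name, id, grace)
        // Bound the RPC itself slightly beyond the grace period.
        ctx, cancel := context.WithTimeout(ctx, time.Duration(grace+2)*time.Second)
        defer cancel()
        return rt.StopContainer(ctx, id, grace)
    }

    type fakeRuntime struct{}

    func (fakeRuntime) StopContainer(context.Context, string, int64) error { return nil }

    func main() {
        _ = killContainerWithGrace(context.Background(), fakeRuntime{},
            "swift-kuttl-tests/swift-storage-0", "account-server", "cri-o://76755a52", 30)
    }
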
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gblbk" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.053122 4732 scope.go:117] "RemoveContainer" containerID="432b5c5eba1b55f33733f64fa26fd00fcd2ddf085c54e19e85cffa05ca5842c9" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061704 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061740 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061772 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061781 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061792 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061801 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061810 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061818 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061866 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061875 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061883 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061891 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061899 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" 
containerID="59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061700 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061961 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061983 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.061996 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.062008 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.062019 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.062032 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.062044 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.062055 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.062067 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.062080 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.062090 4732 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.062101 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.064268 4732 generic.go:334] "Generic (PLEG): container finished" podID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerID="7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c" exitCode=0 Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.064293 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" event={"ID":"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9","Type":"ContainerDied","Data":"7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c"} Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.685501 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.787379 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-run-httpd\") pod \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.787482 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-etc-swift\") pod \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.787542 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-log-httpd\") pod \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.787585 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4pgj\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-kube-api-access-c4pgj\") pod \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.787736 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-config-data\") pod \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\" (UID: \"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9\") " Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.789356 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" (UID: "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.789430 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" (UID: "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.804647 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" (UID: "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.804934 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-kube-api-access-c4pgj" (OuterVolumeSpecName: "kube-api-access-c4pgj") pod "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" (UID: "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9"). InnerVolumeSpecName "kube-api-access-c4pgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.850801 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-config-data" (OuterVolumeSpecName: "config-data") pod "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" (UID: "cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.889454 4732 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.889499 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.889512 4732 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.889527 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4pgj\" (UniqueName: \"kubernetes.io/projected/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-kube-api-access-c4pgj\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:25 crc kubenswrapper[4732]: I0131 09:19:25.889541 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.089288 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a" exitCode=0 Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.089322 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" 
containerID="76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21" exitCode=0 Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.089364 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a"} Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.089415 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21"} Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.091642 4732 generic.go:334] "Generic (PLEG): container finished" podID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerID="14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628" exitCode=0 Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.091713 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" event={"ID":"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9","Type":"ContainerDied","Data":"14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628"} Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.091750 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" event={"ID":"cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9","Type":"ContainerDied","Data":"84936fd9e60dfd8bff06c859b11fa4812f19d8ba83da60cc1b9233b2901d5e51"} Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.091772 4732 scope.go:117] "RemoveContainer" containerID="14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628" Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.091982 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq" Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.109419 4732 scope.go:117] "RemoveContainer" containerID="7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c" Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.124309 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"] Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.130757 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-59dd5dbb7c-lcxsq"] Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.133084 4732 scope.go:117] "RemoveContainer" containerID="14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628" Jan 31 09:19:26 crc kubenswrapper[4732]: E0131 09:19:26.133540 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628\": container with ID starting with 14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628 not found: ID does not exist" containerID="14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628" Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.133619 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628"} err="failed to get container status \"14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628\": rpc error: code = NotFound desc = could not find container \"14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628\": container with ID starting with 14e28a091b47ebba4b423f5066db9e9d08dac2ef7f479ee58c476aa4be9c8628 not found: ID does not exist" Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.133682 4732 scope.go:117] "RemoveContainer" containerID="7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c" Jan 31 09:19:26 crc kubenswrapper[4732]: E0131 09:19:26.134166 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c\": container with ID starting with 7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c not found: ID does not exist" containerID="7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c" Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.134194 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c"} err="failed to get container status \"7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c\": rpc error: code = NotFound desc = could not find container \"7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c\": container with ID starting with 7645a81d1554d39a7245fbb85e3b78b90ce8cbb1ff6a3161695b88d390dd384c not found: ID does not exist" Jan 31 09:19:26 crc kubenswrapper[4732]: I0131 09:19:26.554936 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" path="/var/lib/kubelet/pods/cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9/volumes" Jan 31 09:19:47 crc kubenswrapper[4732]: I0131 09:19:47.498246 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
Jan 31 09:19:47 crc kubenswrapper[4732]: I0131 09:19:47.498882 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:19:54 crc kubenswrapper[4732]: I0131 09:19:54.960192 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.127034 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-cache\") pod \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") "
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.127102 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-lock\") pod \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") "
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.127205 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") pod \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") "
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.127267 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5fb9\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-kube-api-access-b5fb9\") pod \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") "
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.127426 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\" (UID: \"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65\") "
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.127570 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-lock" (OuterVolumeSpecName: "lock") pod "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.127965 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-cache" (OuterVolumeSpecName: "cache") pod "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.127974 4732 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-lock\") on node \"crc\" DevicePath \"\""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.133306 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.133344 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-kube-api-access-b5fb9" (OuterVolumeSpecName: "kube-api-access-b5fb9") pod "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65"). InnerVolumeSpecName "kube-api-access-b5fb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.133400 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" (UID: "1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.229162 4732 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-cache\") on node \"crc\" DevicePath \"\""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.229212 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.229233 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5fb9\" (UniqueName: \"kubernetes.io/projected/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65-kube-api-access-b5fb9\") on node \"crc\" DevicePath \"\""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.229291 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.258988 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.330291 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.340384 4732 generic.go:334] "Generic (PLEG): container finished" podID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerID="1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479" exitCode=137
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.340456 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479"}
event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479"} Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.340557 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65","Type":"ContainerDied","Data":"34458a4f4a5d5e0c1ee442a308b559938e3621f6dae5595249f026b6e797962d"} Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.340586 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.340594 4732 scope.go:117] "RemoveContainer" containerID="38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.369444 4732 scope.go:117] "RemoveContainer" containerID="1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.382717 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.393721 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.401049 4732 scope.go:117] "RemoveContainer" containerID="5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.416943 4732 scope.go:117] "RemoveContainer" containerID="4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.435508 4732 scope.go:117] "RemoveContainer" containerID="4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.451374 4732 scope.go:117] "RemoveContainer" containerID="dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.471065 4732 scope.go:117] "RemoveContainer" containerID="adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.494049 4732 scope.go:117] "RemoveContainer" containerID="9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.512249 4732 scope.go:117] "RemoveContainer" containerID="c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.531114 4732 scope.go:117] "RemoveContainer" containerID="31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.548981 4732 scope.go:117] "RemoveContainer" containerID="5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.567560 4732 scope.go:117] "RemoveContainer" containerID="a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.591036 4732 scope.go:117] "RemoveContainer" containerID="3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.609881 4732 scope.go:117] "RemoveContainer" containerID="14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.624676 4732 scope.go:117] 
"RemoveContainer" containerID="59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.641310 4732 scope.go:117] "RemoveContainer" containerID="76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.657677 4732 scope.go:117] "RemoveContainer" containerID="38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.658293 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337\": container with ID starting with 38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337 not found: ID does not exist" containerID="38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.658342 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337"} err="failed to get container status \"38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337\": rpc error: code = NotFound desc = could not find container \"38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337\": container with ID starting with 38cdf7c7cb89df8d1c3787fb632d47d09b331d20b08aa93cb17f6a88341fb337 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.658373 4732 scope.go:117] "RemoveContainer" containerID="1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.658909 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479\": container with ID starting with 1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479 not found: ID does not exist" containerID="1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.658938 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479"} err="failed to get container status \"1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479\": rpc error: code = NotFound desc = could not find container \"1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479\": container with ID starting with 1183d753033ef14dd779291a910acae735ec0ef3a1e7104272c1b8d040389479 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.658956 4732 scope.go:117] "RemoveContainer" containerID="5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.659295 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff\": container with ID starting with 5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff not found: ID does not exist" containerID="5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.659323 4732 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff"} err="failed to get container status \"5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff\": rpc error: code = NotFound desc = could not find container \"5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff\": container with ID starting with 5521d09393a37a16e0269ff010dbf8c3432cdb69f1bca2e5b6f057535e6adaff not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.659340 4732 scope.go:117] "RemoveContainer" containerID="4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.659810 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8\": container with ID starting with 4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8 not found: ID does not exist" containerID="4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.659838 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8"} err="failed to get container status \"4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8\": rpc error: code = NotFound desc = could not find container \"4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8\": container with ID starting with 4c4fd17a487c3ad507bed8e66df924a39a8c2f99f7e806c1497f9605883141b8 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.659866 4732 scope.go:117] "RemoveContainer" containerID="4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.660681 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91\": container with ID starting with 4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91 not found: ID does not exist" containerID="4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.660710 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91"} err="failed to get container status \"4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91\": rpc error: code = NotFound desc = could not find container \"4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91\": container with ID starting with 4ff9ae6192a8f62a8feaec1a934822bc986fef6a2a789c88bf3fe4a983cc5f91 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.660733 4732 scope.go:117] "RemoveContainer" containerID="dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.661173 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844\": container with ID starting with dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844 not found: ID does not exist" 
containerID="dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.661203 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844"} err="failed to get container status \"dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844\": rpc error: code = NotFound desc = could not find container \"dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844\": container with ID starting with dd273c25d7dc91e5954ff8d5fbc44b6eeefa5af49ff09461c024f0d29819d844 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.661222 4732 scope.go:117] "RemoveContainer" containerID="adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.661536 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad\": container with ID starting with adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad not found: ID does not exist" containerID="adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.661589 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad"} err="failed to get container status \"adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad\": rpc error: code = NotFound desc = could not find container \"adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad\": container with ID starting with adf3519dd1a3dc4b8b31ed35e741ea5a2ae39875837269f96a619bf6446696ad not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.661623 4732 scope.go:117] "RemoveContainer" containerID="9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.661945 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849\": container with ID starting with 9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849 not found: ID does not exist" containerID="9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.661980 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849"} err="failed to get container status \"9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849\": rpc error: code = NotFound desc = could not find container \"9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849\": container with ID starting with 9c260d5b071de9f983a884926c74e3c7965a5566fdf129a8231af8aff9341849 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.662015 4732 scope.go:117] "RemoveContainer" containerID="c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.662334 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7\": container with ID starting with c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7 not found: ID does not exist" containerID="c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.662386 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7"} err="failed to get container status \"c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7\": rpc error: code = NotFound desc = could not find container \"c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7\": container with ID starting with c4fa65606e0bdcb6f86bd4340763a51bfe2fa0b29c86e336932e2d7b4d5a7cc7 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.662417 4732 scope.go:117] "RemoveContainer" containerID="31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.662806 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050\": container with ID starting with 31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050 not found: ID does not exist" containerID="31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.662838 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050"} err="failed to get container status \"31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050\": rpc error: code = NotFound desc = could not find container \"31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050\": container with ID starting with 31889d60dae084e50ec608b663e4d80dd770e7914146b1993679075fbcb5c050 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.662858 4732 scope.go:117] "RemoveContainer" containerID="5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.663100 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef\": container with ID starting with 5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef not found: ID does not exist" containerID="5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.663128 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef"} err="failed to get container status \"5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef\": rpc error: code = NotFound desc = could not find container \"5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef\": container with ID starting with 5de79c9073a05f38da4352247b5e904a357945349a71ed92906b206964dbd2ef not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.663147 4732 scope.go:117] "RemoveContainer" containerID="a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a" Jan 31 09:19:55 crc 
kubenswrapper[4732]: E0131 09:19:55.663392 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a\": container with ID starting with a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a not found: ID does not exist" containerID="a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.663453 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a"} err="failed to get container status \"a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a\": rpc error: code = NotFound desc = could not find container \"a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a\": container with ID starting with a768c10034a64a5a20c9c6d0006c491e9b64bcedd25898cbbfa5a7d80612ad9a not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.663470 4732 scope.go:117] "RemoveContainer" containerID="3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.663772 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327\": container with ID starting with 3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327 not found: ID does not exist" containerID="3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.663801 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327"} err="failed to get container status \"3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327\": rpc error: code = NotFound desc = could not find container \"3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327\": container with ID starting with 3c0e4beb7194a2ba16fa5a8867712d4fc5df7cc2c8e674527846af12764e0327 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.663818 4732 scope.go:117] "RemoveContainer" containerID="14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521" Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.664115 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521\": container with ID starting with 14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521 not found: ID does not exist" containerID="14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521" Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.664144 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521"} err="failed to get container status \"14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521\": rpc error: code = NotFound desc = could not find container \"14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521\": container with ID starting with 14b965bc1ec24ff789b68d1102389631b9549153560571ce9d36806cdfc47521 not found: ID does not exist" Jan 31 09:19:55 crc kubenswrapper[4732]: 
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.664160 4732 scope.go:117] "RemoveContainer" containerID="59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f"
Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.664522 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f\": container with ID starting with 59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f not found: ID does not exist" containerID="59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f"
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.664548 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f"} err="failed to get container status \"59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f\": rpc error: code = NotFound desc = could not find container \"59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f\": container with ID starting with 59ef299c22aaa5865a8f99313b8748ea14863e08f6ff3878d52b70ef3516ec1f not found: ID does not exist"
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.664565 4732 scope.go:117] "RemoveContainer" containerID="76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21"
Jan 31 09:19:55 crc kubenswrapper[4732]: E0131 09:19:55.664832 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21\": container with ID starting with 76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21 not found: ID does not exist" containerID="76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21"
Jan 31 09:19:55 crc kubenswrapper[4732]: I0131 09:19:55.664860 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21"} err="failed to get container status \"76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21\": rpc error: code = NotFound desc = could not find container \"76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21\": container with ID starting with 76755a52198a018f0a2d0f0ab371558b237a7993168d70acf02d525f01693f21 not found: ID does not exist"
Jan 31 09:19:56 crc kubenswrapper[4732]: I0131 09:19:56.550589 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" path="/var/lib/kubelet/pods/1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65/volumes"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.370615 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"]
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.370899 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerName="proxy-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.370914 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerName="proxy-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.370925 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fc93ea-e490-4d30-a742-97b668a286c5" containerName="swift-ring-rebalance"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.370933 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fc93ea-e490-4d30-a742-97b668a286c5" containerName="swift-ring-rebalance"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.370944 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-auditor"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.370950 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-auditor"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.370964 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-reaper"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.370969 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-reaper"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.370983 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-replicator"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.370989 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-replicator"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.370999 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371005 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371013 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="rsync"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371019 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="rsync"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371029 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-expirer"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371034 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-expirer"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371045 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371050 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371057 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-auditor"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371063 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-auditor"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371070 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-updater"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371076 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-updater"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371084 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371090 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371097 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-updater"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371102 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-updater"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371111 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-auditor"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371118 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-auditor"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371126 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerName="proxy-httpd"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371132 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerName="proxy-httpd"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371139 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="swift-recon-cron"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371144 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="swift-recon-cron"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371154 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-replicator"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371159 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-replicator"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371168 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-replicator"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371173 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-replicator"
Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.371180 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-sharder"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371185 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-sharder"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371288 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-replicator"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371299 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371309 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-updater"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371318 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-reaper"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371324 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-replicator"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371332 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="rsync"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371339 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371347 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-replicator"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371354 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-updater"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371368 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="swift-recon-cron"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371383 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-sharder"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371395 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-auditor"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371405 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fc93ea-e490-4d30-a742-97b668a286c5" containerName="swift-ring-rebalance"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371413 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="account-auditor"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371423 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="object-expirer"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371431 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerName="proxy-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371440 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-server"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371446 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe66c2d-70ee-48c8-927c-4b1ba8d5cb65" containerName="container-auditor"
Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.371452 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerName="proxy-httpd"
podUID="cea16bcf-b0b0-4d2a-ad37-ee127b9a1fd9" containerName="proxy-httpd" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.372156 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.375285 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.375384 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.375603 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-nghpf" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.376286 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.435098 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"] Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.448729 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.453431 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.456060 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.459139 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.459279 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb5e63b-882d-4388-abb1-130923832c9f-config-data\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.459332 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-log-httpd\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.459377 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-run-httpd\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.459504 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n98z\" (UniqueName: 
\"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-kube-api-access-5n98z\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.477818 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.495752 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.495987 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.500735 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.500771 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.500893 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.501065 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.560954 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-lock\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.561023 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb5e63b-882d-4388-abb1-130923832c9f-config-data\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.561050 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-log-httpd\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.561083 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zdfg\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-kube-api-access-9zdfg\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.561125 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-run-httpd\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.561146 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.561195 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n98z\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-kube-api-access-5n98z\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.561252 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.561281 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.561311 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-cache\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.562580 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-log-httpd\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.562652 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-run-httpd\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.562694 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.562711 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq: configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.562761 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift podName:8cb5e63b-882d-4388-abb1-130923832c9f nodeName:}" failed. No retries permitted until 2026-01-31 09:19:58.062739615 +0000 UTC m=+1136.368615919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift") pod "swift-proxy-7d8cf99555-lvwdq" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f") : configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.566410 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb5e63b-882d-4388-abb1-130923832c9f-config-data\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.584791 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n98z\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-kube-api-access-5n98z\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.662672 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.662952 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-lock\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.663064 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-cache\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.663293 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.663400 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.663476 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-lock\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.662861 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.663594 4732 projected.go:194] Error preparing data for projected 
volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.663657 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.663720 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift podName:ea3117d7-0038-4ca5-bee5-ae76db9a12eb nodeName:}" failed. No retries permitted until 2026-01-31 09:19:58.163698832 +0000 UTC m=+1136.469575036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift") pod "swift-storage-0" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb") : configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.663837 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqbp6\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-kube-api-access-pqbp6\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.663918 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-cache\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.663995 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txhcm\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-kube-api-access-txhcm\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.663795 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") device mount path \"/mnt/openstack/pv08\"" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.664131 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-cache\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.664245 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.664302 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-cache\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.664378 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-lock\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.664452 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.664537 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zdfg\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-kube-api-access-9zdfg\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.664783 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-lock\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.683951 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.684565 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zdfg\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-kube-api-access-9zdfg\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.765733 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.765790 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqbp6\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-kube-api-access-pqbp6\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.765825 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txhcm\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-kube-api-access-txhcm\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.765864 4732 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-cache\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.765899 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.765927 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.765945 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") device mount path \"/mnt/openstack/pv06\"" pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.765985 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-lock\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.766394 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-cache\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.766641 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.766684 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.766720 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-cache\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.766737 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift podName:18b68f5e-a1b4-4f52-9a4e-5967735ec105 nodeName:}" failed. No retries permitted until 2026-01-31 09:19:58.266717982 +0000 UTC m=+1136.572594196 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift") pod "swift-storage-2" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105") : configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.766765 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.766856 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-lock\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.766875 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-lock\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.767010 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.767159 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.767244 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.767181 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-cache\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.767364 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-lock\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: E0131 09:19:57.767452 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift podName:eb04e24b-fc92-4f2e-abcb-fa46706f699a nodeName:}" failed. No retries permitted until 2026-01-31 09:19:58.267384783 +0000 UTC m=+1136.573260987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift") pod "swift-storage-1" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a") : configmap "swift-ring-files" not found Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.787356 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.787622 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.790339 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txhcm\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-kube-api-access-txhcm\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:57 crc kubenswrapper[4732]: I0131 09:19:57.792239 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqbp6\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-kube-api-access-pqbp6\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:58 crc kubenswrapper[4732]: I0131 09:19:58.070944 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.071110 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.071127 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq: configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.071177 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift podName:8cb5e63b-882d-4388-abb1-130923832c9f nodeName:}" failed. No retries permitted until 2026-01-31 09:19:59.07116293 +0000 UTC m=+1137.377039134 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift") pod "swift-proxy-7d8cf99555-lvwdq" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f") : configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: I0131 09:19:58.172357 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.172616 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.172681 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.172756 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift podName:ea3117d7-0038-4ca5-bee5-ae76db9a12eb nodeName:}" failed. No retries permitted until 2026-01-31 09:19:59.172733475 +0000 UTC m=+1137.478609749 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift") pod "swift-storage-0" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb") : configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: I0131 09:19:58.274456 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:58 crc kubenswrapper[4732]: I0131 09:19:58.274610 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.274876 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.274901 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.274966 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift podName:eb04e24b-fc92-4f2e-abcb-fa46706f699a nodeName:}" failed. No retries permitted until 2026-01-31 09:19:59.274945081 +0000 UTC m=+1137.580821315 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift") pod "swift-storage-1" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a") : configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.275132 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.275160 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 09:19:58 crc kubenswrapper[4732]: E0131 09:19:58.275220 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift podName:18b68f5e-a1b4-4f52-9a4e-5967735ec105 nodeName:}" failed. No retries permitted until 2026-01-31 09:19:59.275202149 +0000 UTC m=+1137.581078353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift") pod "swift-storage-2" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105") : configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: I0131 09:19:59.087765 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.088046 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.088097 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq: configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.088235 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift podName:8cb5e63b-882d-4388-abb1-130923832c9f nodeName:}" failed. No retries permitted until 2026-01-31 09:20:01.088196184 +0000 UTC m=+1139.394072438 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift") pod "swift-proxy-7d8cf99555-lvwdq" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f") : configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: I0131 09:19:59.189909 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.190141 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.190200 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.190264 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift podName:ea3117d7-0038-4ca5-bee5-ae76db9a12eb nodeName:}" failed. No retries permitted until 2026-01-31 09:20:01.190240535 +0000 UTC m=+1139.496116739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift") pod "swift-storage-0" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb") : configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: I0131 09:19:59.292048 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:19:59 crc kubenswrapper[4732]: I0131 09:19:59.292191 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.292328 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.292360 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.292420 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift podName:eb04e24b-fc92-4f2e-abcb-fa46706f699a nodeName:}" failed. No retries permitted until 2026-01-31 09:20:01.292395728 +0000 UTC m=+1139.598272012 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift") pod "swift-storage-1" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a") : configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.294763 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.294812 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 09:19:59 crc kubenswrapper[4732]: E0131 09:19:59.300548 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift podName:18b68f5e-a1b4-4f52-9a4e-5967735ec105 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:01.294864494 +0000 UTC m=+1139.600740798 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift") pod "swift-storage-2" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105") : configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.121532 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.121807 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.122032 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq: configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.122139 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift podName:8cb5e63b-882d-4388-abb1-130923832c9f nodeName:}" failed. No retries permitted until 2026-01-31 09:20:05.122100688 +0000 UTC m=+1143.427976942 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift") pod "swift-proxy-7d8cf99555-lvwdq" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f") : configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.223587 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.223891 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.223944 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.224049 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift podName:ea3117d7-0038-4ca5-bee5-ae76db9a12eb nodeName:}" failed. No retries permitted until 2026-01-31 09:20:05.224002643 +0000 UTC m=+1143.529878877 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift") pod "swift-storage-0" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb") : configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.265659 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-7q9kp"] Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.267763 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp" Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.270949 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.271153 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.280462 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-7q9kp"] Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.325691 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.325894 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.325939 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.326067 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.326120 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift podName:eb04e24b-fc92-4f2e-abcb-fa46706f699a nodeName:}" failed. No retries permitted until 2026-01-31 09:20:05.326102886 +0000 UTC m=+1143.631979090 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift") pod "swift-storage-1" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a") : configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.326022 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.326140 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 09:20:01 crc kubenswrapper[4732]: E0131 09:20:01.326155 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift podName:18b68f5e-a1b4-4f52-9a4e-5967735ec105 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:05.326149977 +0000 UTC m=+1143.632026181 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift") pod "swift-storage-2" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105") : configmap "swift-ring-files" not found
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.426956 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-scripts\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.427014 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-swiftconf\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.427158 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e25bc49e-1bbe-4103-b751-fee5d86e7a92-etc-swift\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.427371 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-ring-data-devices\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.427433 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54c9x\" (UniqueName: \"kubernetes.io/projected/e25bc49e-1bbe-4103-b751-fee5d86e7a92-kube-api-access-54c9x\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.427585 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-dispersionconf\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.528808 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-ring-data-devices\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.528870 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54c9x\" (UniqueName: \"kubernetes.io/projected/e25bc49e-1bbe-4103-b751-fee5d86e7a92-kube-api-access-54c9x\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.528932 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-dispersionconf\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.529049 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-scripts\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.529103 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-swiftconf\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.529142 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e25bc49e-1bbe-4103-b751-fee5d86e7a92-etc-swift\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.529614 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e25bc49e-1bbe-4103-b751-fee5d86e7a92-etc-swift\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.530145 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-scripts\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.530337 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-ring-data-devices\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.544370 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-dispersionconf\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.547007 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-swiftconf\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.558941 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54c9x\" (UniqueName: \"kubernetes.io/projected/e25bc49e-1bbe-4103-b751-fee5d86e7a92-kube-api-access-54c9x\") pod \"swift-ring-rebalance-7q9kp\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") " pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:01 crc kubenswrapper[4732]: I0131 09:20:01.589327 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:02 crc kubenswrapper[4732]: I0131 09:20:02.040498 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-7q9kp"]
Jan 31 09:20:02 crc kubenswrapper[4732]: I0131 09:20:02.395122 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp" event={"ID":"e25bc49e-1bbe-4103-b751-fee5d86e7a92","Type":"ContainerStarted","Data":"2bf5f760920241f10f11d255f66394a5366cad9af6f5cf262601f44403aa7939"}
Jan 31 09:20:02 crc kubenswrapper[4732]: I0131 09:20:02.395186 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp" event={"ID":"e25bc49e-1bbe-4103-b751-fee5d86e7a92","Type":"ContainerStarted","Data":"2e9feb5a7bd4dc52e920699e4073a986cd013aa621986002fa31f113c2896401"}
Jan 31 09:20:02 crc kubenswrapper[4732]: I0131 09:20:02.414807 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp" podStartSLOduration=1.414792119 podStartE2EDuration="1.414792119s" podCreationTimestamp="2026-01-31 09:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:02.410463885 +0000 UTC m=+1140.716340139" watchObservedRunningTime="2026-01-31 09:20:02.414792119 +0000 UTC m=+1140.720668323"
Jan 31 09:20:05 crc kubenswrapper[4732]: I0131 09:20:05.208481 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.208753 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.209129 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq: configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.209221 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift podName:8cb5e63b-882d-4388-abb1-130923832c9f nodeName:}" failed. No retries permitted until 2026-01-31 09:20:13.209194184 +0000 UTC m=+1151.515070388 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift") pod "swift-proxy-7d8cf99555-lvwdq" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f") : configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: I0131 09:20:05.310477 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.310815 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.310923 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.310989 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift podName:ea3117d7-0038-4ca5-bee5-ae76db9a12eb nodeName:}" failed. No retries permitted until 2026-01-31 09:20:13.310970646 +0000 UTC m=+1151.616846850 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift") pod "swift-storage-0" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb") : configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: I0131 09:20:05.413018 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.413269 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: I0131 09:20:05.413293 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.413299 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.413509 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift podName:eb04e24b-fc92-4f2e-abcb-fa46706f699a nodeName:}" failed. No retries permitted until 2026-01-31 09:20:13.41347767 +0000 UTC m=+1151.719353914 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift") pod "swift-storage-1" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a") : configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.413372 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.413547 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found
Jan 31 09:20:05 crc kubenswrapper[4732]: E0131 09:20:05.413617 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift podName:18b68f5e-a1b4-4f52-9a4e-5967735ec105 nodeName:}" failed. No retries permitted until 2026-01-31 09:20:13.413592404 +0000 UTC m=+1151.719468728 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift") pod "swift-storage-2" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105") : configmap "swift-ring-files" not found
Jan 31 09:20:12 crc kubenswrapper[4732]: I0131 09:20:12.471356 4732 generic.go:334] "Generic (PLEG): container finished" podID="e25bc49e-1bbe-4103-b751-fee5d86e7a92" containerID="2bf5f760920241f10f11d255f66394a5366cad9af6f5cf262601f44403aa7939" exitCode=0
Jan 31 09:20:12 crc kubenswrapper[4732]: I0131 09:20:12.471408 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp" event={"ID":"e25bc49e-1bbe-4103-b751-fee5d86e7a92","Type":"ContainerDied","Data":"2bf5f760920241f10f11d255f66394a5366cad9af6f5cf262601f44403aa7939"}
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.255930 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.265798 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") pod \"swift-proxy-7d8cf99555-lvwdq\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"
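[editor note] The (durationBeforeRetry 8s) above is the kubelet's per-volume exponential backoff: each consecutive MountVolume failure doubles the wait before the next attempt. A minimal Go sketch of that policy, assuming the upstream defaults of a 500ms initial delay, a factor of 2, and a 2m2s cap (these constants are assumptions based on kubelet defaults, not values read from this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    // nextBackoff doubles the previous delay up to a cap, mirroring the
    // kubelet's per-operation backoff for failed volume mounts.
    // Initial delay, factor, and cap are assumed defaults.
    func nextBackoff(prev time.Duration) time.Duration {
    	const (
    		initial  = 500 * time.Millisecond
    		factor   = 2
    		maxDelay = 2*time.Minute + 2*time.Second
    	)
    	if prev == 0 {
    		return initial
    	}
    	next := prev * factor
    	if next > maxDelay {
    		return maxDelay
    	}
    	return next
    }

    func main() {
    	// Five consecutive failures: 500ms, 1s, 2s, 4s, 8s -- the fifth
    	// retry delay matches the 8s seen in the log above.
    	var d time.Duration
    	for i := 1; i <= 5; i++ {
    		d = nextBackoff(d)
    		fmt.Printf("failure %d -> retry in %s\n", i, d)
    	}
    }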
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.292517 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.357353 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.362482 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") pod \"swift-storage-0\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.373353 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.459048 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.459193 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.467922 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") pod \"swift-storage-2\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.468196 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") pod \"swift-storage-1\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.717755 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.731209 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.742021 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"]
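[editor note] The etc-swift volume on the proxy and storage pods is a projected volume sourcing the swift-ring-files ConfigMap, which is why every mount failed until the rebalance job published the rings at 09:20:12; the rebalance pod's own etc-swift is an empty-dir with no such dependency. A minimal sketch of such a volume using the Kubernetes Go client types (the names mirror the log, but the structure is an assumption for illustration, not the operator's actual manifest):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Projected volume that materializes ring files from a ConfigMap.
    	// Until "swift-ring-files" exists, MountVolume.SetUp fails exactly
    	// as in the log above and the kubelet retries with backoff.
    	etcSwift := corev1.Volume{
    		Name: "etc-swift",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{
    							Name: "swift-ring-files",
    						},
    					},
    				}},
    			},
    		},
    	}
    	fmt.Printf("%+v\n", etcSwift)
    }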
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.754183 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.858868 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.864545 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-swiftconf\") pod \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") "
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.864649 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-ring-data-devices\") pod \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") "
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.864723 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-scripts\") pod \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") "
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.864796 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e25bc49e-1bbe-4103-b751-fee5d86e7a92-etc-swift\") pod \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") "
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.864837 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-dispersionconf\") pod \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") "
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.864866 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54c9x\" (UniqueName: \"kubernetes.io/projected/e25bc49e-1bbe-4103-b751-fee5d86e7a92-kube-api-access-54c9x\") pod \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\" (UID: \"e25bc49e-1bbe-4103-b751-fee5d86e7a92\") "
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.865428 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e25bc49e-1bbe-4103-b751-fee5d86e7a92" (UID: "e25bc49e-1bbe-4103-b751-fee5d86e7a92"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.865992 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e25bc49e-1bbe-4103-b751-fee5d86e7a92-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e25bc49e-1bbe-4103-b751-fee5d86e7a92" (UID: "e25bc49e-1bbe-4103-b751-fee5d86e7a92"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.880517 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25bc49e-1bbe-4103-b751-fee5d86e7a92-kube-api-access-54c9x" (OuterVolumeSpecName: "kube-api-access-54c9x") pod "e25bc49e-1bbe-4103-b751-fee5d86e7a92" (UID: "e25bc49e-1bbe-4103-b751-fee5d86e7a92"). InnerVolumeSpecName "kube-api-access-54c9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.905449 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-scripts" (OuterVolumeSpecName: "scripts") pod "e25bc49e-1bbe-4103-b751-fee5d86e7a92" (UID: "e25bc49e-1bbe-4103-b751-fee5d86e7a92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.915892 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e25bc49e-1bbe-4103-b751-fee5d86e7a92" (UID: "e25bc49e-1bbe-4103-b751-fee5d86e7a92"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.917166 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e25bc49e-1bbe-4103-b751-fee5d86e7a92" (UID: "e25bc49e-1bbe-4103-b751-fee5d86e7a92"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.966622 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.966652 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.966680 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e25bc49e-1bbe-4103-b751-fee5d86e7a92-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.966689 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e25bc49e-1bbe-4103-b751-fee5d86e7a92-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.966698 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e25bc49e-1bbe-4103-b751-fee5d86e7a92-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:13 crc kubenswrapper[4732]: I0131 09:20:13.966707 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54c9x\" (UniqueName: \"kubernetes.io/projected/e25bc49e-1bbe-4103-b751-fee5d86e7a92-kube-api-access-54c9x\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.260805 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 31 09:20:14 crc kubenswrapper[4732]: W0131 09:20:14.262862 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b68f5e_a1b4_4f52_9a4e_5967735ec105.slice/crio-9213fe58caadec775e92ed28d6bba2a8ef3c6d5b541e8e1e305d2da7b48f0916 WatchSource:0}: Error finding container 9213fe58caadec775e92ed28d6bba2a8ef3c6d5b541e8e1e305d2da7b48f0916: Status 404 returned error can't find the container with id 9213fe58caadec775e92ed28d6bba2a8ef3c6d5b541e8e1e305d2da7b48f0916
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.333075 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 31 09:20:14 crc kubenswrapper[4732]: W0131 09:20:14.344576 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb04e24b_fc92_4f2e_abcb_fa46706f699a.slice/crio-484b2ea8fc5f3e2a8486b0977aa570fbe958192eb24867f074d00e930ab4b1a3 WatchSource:0}: Error finding container 484b2ea8fc5f3e2a8486b0977aa570fbe958192eb24867f074d00e930ab4b1a3: Status 404 returned error can't find the container with id 484b2ea8fc5f3e2a8486b0977aa570fbe958192eb24867f074d00e930ab4b1a3
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.503991 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"6e7852a660c0fcf3225ca272b8e2af7fc11735fb24d2d14460cde6438a7beb2d"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.504032 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"8f37816afb654d1e776d0d4eec1d440d5af5c0af6bb7163bf1822d00c7129008"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.504041 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"d31d074c83b389c8db51d7fd48db465e4e9e513ed6084f0c730eaa69444cb6c5"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.504049 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"6568c920e3e41f0ca77451bc255f08043b793e9b23cf5a92a349e3e4e75234e1"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.505644 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" event={"ID":"8cb5e63b-882d-4388-abb1-130923832c9f","Type":"ContainerStarted","Data":"4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.505693 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" event={"ID":"8cb5e63b-882d-4388-abb1-130923832c9f","Type":"ContainerStarted","Data":"783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.505707 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" event={"ID":"8cb5e63b-882d-4388-abb1-130923832c9f","Type":"ContainerStarted","Data":"32df1a9318cd9e4682742a9570f27db68b7c7b206dbaec4cd560f06f827fb57e"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.505795 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.505826 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.507016 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"484b2ea8fc5f3e2a8486b0977aa570fbe958192eb24867f074d00e930ab4b1a3"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.508438 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"9213fe58caadec775e92ed28d6bba2a8ef3c6d5b541e8e1e305d2da7b48f0916"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.514221 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp" event={"ID":"e25bc49e-1bbe-4103-b751-fee5d86e7a92","Type":"ContainerDied","Data":"2e9feb5a7bd4dc52e920699e4073a986cd013aa621986002fa31f113c2896401"}
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.514253 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e9feb5a7bd4dc52e920699e4073a986cd013aa621986002fa31f113c2896401"
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.514330 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-7q9kp"
Jan 31 09:20:14 crc kubenswrapper[4732]: I0131 09:20:14.537675 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" podStartSLOduration=17.537636681 podStartE2EDuration="17.537636681s" podCreationTimestamp="2026-01-31 09:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:14.53181473 +0000 UTC m=+1152.837690934" watchObservedRunningTime="2026-01-31 09:20:14.537636681 +0000 UTC m=+1152.843512885"
Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.540265 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab"}
Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.540326 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6"}
Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.540342 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c"}
Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.540355 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178"}
pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.546166 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"672f5feadfa214c6a0a943e614b3f6be2205caf0cfad789bc48eecd5126064c9"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.546268 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"8fa21222e4cd9c26bf7a8699721788dc5f3bcbad179ebfca9e1ff5dd4f7eca9f"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.546290 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"72926ae802ba11d13ab77c79539bfcc94151936d207f632232dd7d1f41349d6d"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.546304 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"fa17ab717c5754facbf3da31cb4afd3477236bd426c65c9122642f27b5886fb9"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.546361 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"caac1f6da96d02d4cab51830f080cd652323af90c1836570aef955dde0207cff"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.552629 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"655a856811f2b949bc2c94495ba00a853818330f4ec09b9aeffd1751288aff79"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.552693 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"61d667d399369be98c22e8ccc9f3b4cb18a2a2b489b34f036b2fc88c0e71e429"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.552706 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"039961fef86b21e927d5ebdf1e2eb67c58775d812f1269411f3f0411c895a43f"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.552718 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"ada6f8ad69317a784118495c96beb36d701ce7b36209a86b9a936d7cc110e3c2"} Jan 31 09:20:15 crc kubenswrapper[4732]: I0131 09:20:15.552730 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"8d252efa39cfce425e89f4670019c9177c3b0c93daee21298b31748b98d341f0"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.563417 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"b50b39b84c6087e4699184efc06a407c325204a64a826bce844fec4c5164858e"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.564732 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"3a8ebcfcb039bb39617fcee0e053ba202cf3bccf0891372d90f2647abbe5c1ea"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.564826 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"420022f2e68e290b989b3165de84484bcc24a80cd43dfb94fa6e26433aff9a55"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.564887 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"fd00da0aa47cdd0410c40f7add08c30b0950cfafc21a202b2619d006f368871b"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.564941 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"78f74f08eb824105edcd5f1f9501b3fa915219d39b763fb391f67d7c1c054292"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.587982 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.588113 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.588171 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.588225 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.615536 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"463912ed98b787656c483ddcf4b73e963647c6d5df261cfe3fad78758d6d1b9d"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.615577 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"7ce880a4ac662620fb475dd18995992d32b8be418497d54beba8a09c1335c1aa"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.615587 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"54fada92647ba12febbb920093f9bc8e7464da78204240a5a00d06baf380ab69"} Jan 31 09:20:16 crc kubenswrapper[4732]: I0131 09:20:16.615595 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"fd1347f500390657be8ce2ee2c537ab8800d5072a87c360ae87e57f1fb6a1882"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.498081 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.498392 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.498434 4732 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.499023 4732 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99271a603de3a603b9be8d8f0bb791de0f202646de27403a5e3efc59790f637f"} pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.499078 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" containerID="cri-o://99271a603de3a603b9be8d8f0bb791de0f202646de27403a5e3efc59790f637f" gracePeriod=600 Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.627043 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d790207-d357-4b47-87bf-5b505e061820" containerID="99271a603de3a603b9be8d8f0bb791de0f202646de27403a5e3efc59790f637f" exitCode=0 Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.627111 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerDied","Data":"99271a603de3a603b9be8d8f0bb791de0f202646de27403a5e3efc59790f637f"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.627163 4732 scope.go:117] "RemoveContainer" containerID="7aa4af54c816ede2288939b5eacffccc23edb9afc7e2a36ef42fe01d52b4ae91" Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.636524 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.636584 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.636599 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.636612 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.636624 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.636636 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerStarted","Data":"3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.649580 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.649649 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.649679 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.649692 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.649707 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.649717 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerStarted","Data":"3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.657440 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"186ff48385c2286a698108d59672234436dfc7dbb5bac5e777070affa544b217"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.657487 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerStarted","Data":"f7bb7853880d63c47b634d75defb148b4ed41cf77de6d39f02020df2ff5b03d3"} Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.674781 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=21.674763849 podStartE2EDuration="21.674763849s" podCreationTimestamp="2026-01-31 09:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:17.671355673 +0000 UTC m=+1155.977231877" watchObservedRunningTime="2026-01-31 09:20:17.674763849 +0000 UTC m=+1155.980640053" Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.714480 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=21.714465738 podStartE2EDuration="21.714465738s" podCreationTimestamp="2026-01-31 09:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:17.710283059 +0000 UTC m=+1156.016159263" watchObservedRunningTime="2026-01-31 09:20:17.714465738 +0000 UTC m=+1156.020341942" Jan 31 09:20:17 crc kubenswrapper[4732]: I0131 09:20:17.759931 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=21.759910175999998 podStartE2EDuration="21.759910176s" podCreationTimestamp="2026-01-31 09:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:17.753118505 +0000 UTC m=+1156.058994719" watchObservedRunningTime="2026-01-31 09:20:17.759910176 +0000 UTC m=+1156.065786380" Jan 31 09:20:18 crc kubenswrapper[4732]: I0131 09:20:18.299699 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:20:18 crc kubenswrapper[4732]: I0131 09:20:18.667077 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"777b6bb11b5556f90e1c2a08822928a50217112bcd9efce47de0d5e1a98e3392"} Jan 31 09:20:23 crc kubenswrapper[4732]: I0131 09:20:23.296480 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.693840 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"] Jan 31 09:20:24 crc kubenswrapper[4732]: E0131 09:20:24.694235 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25bc49e-1bbe-4103-b751-fee5d86e7a92" containerName="swift-ring-rebalance" Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.694252 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25bc49e-1bbe-4103-b751-fee5d86e7a92" containerName="swift-ring-rebalance" Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.694429 4732 
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.694429 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25bc49e-1bbe-4103-b751-fee5d86e7a92" containerName="swift-ring-rebalance"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.695004 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.697649 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.697752 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.709240 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"]
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.754007 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-dispersionconf\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.754060 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-scripts\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.754076 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/957b624e-aeb2-4942-bf7d-9ee57f9d8462-etc-swift\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.754325 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-swiftconf\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.754524 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-ring-data-devices\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.754576 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzwl8\" (UniqueName: \"kubernetes.io/projected/957b624e-aeb2-4942-bf7d-9ee57f9d8462-kube-api-access-lzwl8\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.856237 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-scripts\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.856276 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/957b624e-aeb2-4942-bf7d-9ee57f9d8462-etc-swift\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.856320 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-swiftconf\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.856386 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-ring-data-devices\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.856405 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzwl8\" (UniqueName: \"kubernetes.io/projected/957b624e-aeb2-4942-bf7d-9ee57f9d8462-kube-api-access-lzwl8\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.856423 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-dispersionconf\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.857128 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-ring-data-devices\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.857196 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-scripts\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.857352 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/957b624e-aeb2-4942-bf7d-9ee57f9d8462-etc-swift\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.861992 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-swiftconf\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.862314 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-dispersionconf\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:24 crc kubenswrapper[4732]: I0131 09:20:24.880860 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzwl8\" (UniqueName: \"kubernetes.io/projected/957b624e-aeb2-4942-bf7d-9ee57f9d8462-kube-api-access-lzwl8\") pod \"swift-ring-rebalance-debug-95zn5\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:25 crc kubenswrapper[4732]: I0131 09:20:25.023026 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:25 crc kubenswrapper[4732]: I0131 09:20:25.476351 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"]
Jan 31 09:20:25 crc kubenswrapper[4732]: W0131 09:20:25.477207 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod957b624e_aeb2_4942_bf7d_9ee57f9d8462.slice/crio-bb4258966ea064b1f1ecc473c6e6e539698b32e5021c92e78612f7004423ee2f WatchSource:0}: Error finding container bb4258966ea064b1f1ecc473c6e6e539698b32e5021c92e78612f7004423ee2f: Status 404 returned error can't find the container with id bb4258966ea064b1f1ecc473c6e6e539698b32e5021c92e78612f7004423ee2f
Jan 31 09:20:25 crc kubenswrapper[4732]: I0131 09:20:25.741707 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5" event={"ID":"957b624e-aeb2-4942-bf7d-9ee57f9d8462","Type":"ContainerStarted","Data":"5094c199168505dd687c51c34f8737c8e1d1a70fa2bf148748212d9f385b755e"}
Jan 31 09:20:25 crc kubenswrapper[4732]: I0131 09:20:25.741953 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5" event={"ID":"957b624e-aeb2-4942-bf7d-9ee57f9d8462","Type":"ContainerStarted","Data":"bb4258966ea064b1f1ecc473c6e6e539698b32e5021c92e78612f7004423ee2f"}
Jan 31 09:20:25 crc kubenswrapper[4732]: I0131 09:20:25.762470 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5" podStartSLOduration=1.762443922 podStartE2EDuration="1.762443922s" podCreationTimestamp="2026-01-31 09:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:25.755099384 +0000 UTC m=+1164.060975588" watchObservedRunningTime="2026-01-31 09:20:25.762443922 +0000 UTC m=+1164.068320166"
Jan 31 09:20:28 crc kubenswrapper[4732]: I0131 09:20:28.768792 4732 generic.go:334] "Generic (PLEG): container finished" podID="957b624e-aeb2-4942-bf7d-9ee57f9d8462" containerID="5094c199168505dd687c51c34f8737c8e1d1a70fa2bf148748212d9f385b755e" exitCode=0
Jan 31 09:20:28 crc kubenswrapper[4732]: I0131 09:20:28.768842 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5" event={"ID":"957b624e-aeb2-4942-bf7d-9ee57f9d8462","Type":"ContainerDied","Data":"5094c199168505dd687c51c34f8737c8e1d1a70fa2bf148748212d9f385b755e"}
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.035506 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.067816 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"]
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.075059 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-95zn5"]
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.139633 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-dispersionconf\") pod \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") "
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.139766 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-swiftconf\") pod \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") "
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.139798 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-ring-data-devices\") pod \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") "
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.139864 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/957b624e-aeb2-4942-bf7d-9ee57f9d8462-etc-swift\") pod \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") "
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.139905 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzwl8\" (UniqueName: \"kubernetes.io/projected/957b624e-aeb2-4942-bf7d-9ee57f9d8462-kube-api-access-lzwl8\") pod \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") "
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.140452 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "957b624e-aeb2-4942-bf7d-9ee57f9d8462" (UID: "957b624e-aeb2-4942-bf7d-9ee57f9d8462"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.140545 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-scripts\") pod \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\" (UID: \"957b624e-aeb2-4942-bf7d-9ee57f9d8462\") "
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.140680 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/957b624e-aeb2-4942-bf7d-9ee57f9d8462-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "957b624e-aeb2-4942-bf7d-9ee57f9d8462" (UID: "957b624e-aeb2-4942-bf7d-9ee57f9d8462"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.140909 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.140926 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/957b624e-aeb2-4942-bf7d-9ee57f9d8462-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.145281 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957b624e-aeb2-4942-bf7d-9ee57f9d8462-kube-api-access-lzwl8" (OuterVolumeSpecName: "kube-api-access-lzwl8") pod "957b624e-aeb2-4942-bf7d-9ee57f9d8462" (UID: "957b624e-aeb2-4942-bf7d-9ee57f9d8462"). InnerVolumeSpecName "kube-api-access-lzwl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.160388 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-scripts" (OuterVolumeSpecName: "scripts") pod "957b624e-aeb2-4942-bf7d-9ee57f9d8462" (UID: "957b624e-aeb2-4942-bf7d-9ee57f9d8462"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.165114 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "957b624e-aeb2-4942-bf7d-9ee57f9d8462" (UID: "957b624e-aeb2-4942-bf7d-9ee57f9d8462"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.171315 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "957b624e-aeb2-4942-bf7d-9ee57f9d8462" (UID: "957b624e-aeb2-4942-bf7d-9ee57f9d8462"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.226684 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"]
Jan 31 09:20:30 crc kubenswrapper[4732]: E0131 09:20:30.227003 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957b624e-aeb2-4942-bf7d-9ee57f9d8462" containerName="swift-ring-rebalance"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.227024 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="957b624e-aeb2-4942-bf7d-9ee57f9d8462" containerName="swift-ring-rebalance"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.227310 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="957b624e-aeb2-4942-bf7d-9ee57f9d8462" containerName="swift-ring-rebalance"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.228119 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.236288 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"]
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.241956 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/957b624e-aeb2-4942-bf7d-9ee57f9d8462-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.241992 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.242004 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/957b624e-aeb2-4942-bf7d-9ee57f9d8462-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.242016 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzwl8\" (UniqueName: \"kubernetes.io/projected/957b624e-aeb2-4942-bf7d-9ee57f9d8462-kube-api-access-lzwl8\") on node \"crc\" DevicePath \"\""
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.343527 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-ring-data-devices\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.343595 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-dispersionconf\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.343630 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrzf\" (UniqueName: \"kubernetes.io/projected/1add18f8-8147-4b68-ba76-f331c3e04734-kube-api-access-6zrzf\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.343702 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-swiftconf\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.343722 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-scripts\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.343749 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1add18f8-8147-4b68-ba76-f331c3e04734-etc-swift\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.444468 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-ring-data-devices\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.444545 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-dispersionconf\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.444579 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zrzf\" (UniqueName: \"kubernetes.io/projected/1add18f8-8147-4b68-ba76-f331c3e04734-kube-api-access-6zrzf\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.444624 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-swiftconf\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.444640 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-scripts\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"
Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.444687 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1add18f8-8147-4b68-ba76-f331c3e04734-etc-swift\")
pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.445289 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1add18f8-8147-4b68-ba76-f331c3e04734-etc-swift\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.445344 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-scripts\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.445857 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-ring-data-devices\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.448298 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-dispersionconf\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.456194 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-swiftconf\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.460013 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zrzf\" (UniqueName: \"kubernetes.io/projected/1add18f8-8147-4b68-ba76-f331c3e04734-kube-api-access-6zrzf\") pod \"swift-ring-rebalance-debug-7rl5h\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.554377 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="957b624e-aeb2-4942-bf7d-9ee57f9d8462" path="/var/lib/kubelet/pods/957b624e-aeb2-4942-bf7d-9ee57f9d8462/volumes" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.557765 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.784756 4732 scope.go:117] "RemoveContainer" containerID="5094c199168505dd687c51c34f8737c8e1d1a70fa2bf148748212d9f385b755e" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.784812 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-95zn5" Jan 31 09:20:30 crc kubenswrapper[4732]: I0131 09:20:30.964965 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"] Jan 31 09:20:31 crc kubenswrapper[4732]: I0131 09:20:31.792980 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" event={"ID":"1add18f8-8147-4b68-ba76-f331c3e04734","Type":"ContainerStarted","Data":"1afc72dc67226efea88d83cce600c96a720e1c283b758b093373dafe9a1d70f3"} Jan 31 09:20:31 crc kubenswrapper[4732]: I0131 09:20:31.793287 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" event={"ID":"1add18f8-8147-4b68-ba76-f331c3e04734","Type":"ContainerStarted","Data":"c32eab1c06cf9ce26317b383c741c8bb9495ea22d9e847ac0336329acc318f8a"} Jan 31 09:20:31 crc kubenswrapper[4732]: I0131 09:20:31.811011 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" podStartSLOduration=1.810993319 podStartE2EDuration="1.810993319s" podCreationTimestamp="2026-01-31 09:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:31.809225634 +0000 UTC m=+1170.115101838" watchObservedRunningTime="2026-01-31 09:20:31.810993319 +0000 UTC m=+1170.116869523" Jan 31 09:20:33 crc kubenswrapper[4732]: I0131 09:20:33.809645 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" event={"ID":"1add18f8-8147-4b68-ba76-f331c3e04734","Type":"ContainerDied","Data":"1afc72dc67226efea88d83cce600c96a720e1c283b758b093373dafe9a1d70f3"} Jan 31 09:20:33 crc kubenswrapper[4732]: I0131 09:20:33.809654 4732 generic.go:334] "Generic (PLEG): container finished" podID="1add18f8-8147-4b68-ba76-f331c3e04734" containerID="1afc72dc67226efea88d83cce600c96a720e1c283b758b093373dafe9a1d70f3" exitCode=0 Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.132121 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.161592 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"] Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.168340 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h"] Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.211213 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-ring-data-devices\") pod \"1add18f8-8147-4b68-ba76-f331c3e04734\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.211298 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-scripts\") pod \"1add18f8-8147-4b68-ba76-f331c3e04734\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.211327 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zrzf\" (UniqueName: \"kubernetes.io/projected/1add18f8-8147-4b68-ba76-f331c3e04734-kube-api-access-6zrzf\") pod \"1add18f8-8147-4b68-ba76-f331c3e04734\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.211360 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-swiftconf\") pod \"1add18f8-8147-4b68-ba76-f331c3e04734\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.211395 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1add18f8-8147-4b68-ba76-f331c3e04734-etc-swift\") pod \"1add18f8-8147-4b68-ba76-f331c3e04734\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.211423 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-dispersionconf\") pod \"1add18f8-8147-4b68-ba76-f331c3e04734\" (UID: \"1add18f8-8147-4b68-ba76-f331c3e04734\") " Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.212945 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1add18f8-8147-4b68-ba76-f331c3e04734" (UID: "1add18f8-8147-4b68-ba76-f331c3e04734"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.213352 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1add18f8-8147-4b68-ba76-f331c3e04734-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1add18f8-8147-4b68-ba76-f331c3e04734" (UID: "1add18f8-8147-4b68-ba76-f331c3e04734"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.217071 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1add18f8-8147-4b68-ba76-f331c3e04734-kube-api-access-6zrzf" (OuterVolumeSpecName: "kube-api-access-6zrzf") pod "1add18f8-8147-4b68-ba76-f331c3e04734" (UID: "1add18f8-8147-4b68-ba76-f331c3e04734"). InnerVolumeSpecName "kube-api-access-6zrzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.229965 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-scripts" (OuterVolumeSpecName: "scripts") pod "1add18f8-8147-4b68-ba76-f331c3e04734" (UID: "1add18f8-8147-4b68-ba76-f331c3e04734"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.232858 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1add18f8-8147-4b68-ba76-f331c3e04734" (UID: "1add18f8-8147-4b68-ba76-f331c3e04734"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.237927 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1add18f8-8147-4b68-ba76-f331c3e04734" (UID: "1add18f8-8147-4b68-ba76-f331c3e04734"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.312730 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.312764 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zrzf\" (UniqueName: \"kubernetes.io/projected/1add18f8-8147-4b68-ba76-f331c3e04734-kube-api-access-6zrzf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.312779 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.312793 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1add18f8-8147-4b68-ba76-f331c3e04734-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.312804 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1add18f8-8147-4b68-ba76-f331c3e04734-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.312817 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1add18f8-8147-4b68-ba76-f331c3e04734-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.506228 4732 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["swift-kuttl-tests/swift-ring-rebalance-debug-fcl52"] Jan 31 09:20:35 crc kubenswrapper[4732]: E0131 09:20:35.506732 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1add18f8-8147-4b68-ba76-f331c3e04734" containerName="swift-ring-rebalance" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.506748 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1add18f8-8147-4b68-ba76-f331c3e04734" containerName="swift-ring-rebalance" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.506930 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1add18f8-8147-4b68-ba76-f331c3e04734" containerName="swift-ring-rebalance" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.507459 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.535276 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-fcl52"] Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.618408 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwjhc\" (UniqueName: \"kubernetes.io/projected/f60c0633-b625-41ee-9547-276007d47773-kube-api-access-pwjhc\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.618486 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-scripts\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.618621 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f60c0633-b625-41ee-9547-276007d47773-etc-swift\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.618703 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-swiftconf\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.618732 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-dispersionconf\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.618776 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-ring-data-devices\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.720109 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-swiftconf\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.720365 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-dispersionconf\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.720466 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-ring-data-devices\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.720614 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwjhc\" (UniqueName: \"kubernetes.io/projected/f60c0633-b625-41ee-9547-276007d47773-kube-api-access-pwjhc\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.720740 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-scripts\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.720871 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f60c0633-b625-41ee-9547-276007d47773-etc-swift\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.721307 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f60c0633-b625-41ee-9547-276007d47773-etc-swift\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.721518 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-ring-data-devices\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.721524 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-scripts\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: 
\"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.724119 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-swiftconf\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.724641 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-dispersionconf\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.738739 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwjhc\" (UniqueName: \"kubernetes.io/projected/f60c0633-b625-41ee-9547-276007d47773-kube-api-access-pwjhc\") pod \"swift-ring-rebalance-debug-fcl52\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.825265 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32eab1c06cf9ce26317b383c741c8bb9495ea22d9e847ac0336329acc318f8a" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.825347 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7rl5h" Jan 31 09:20:35 crc kubenswrapper[4732]: I0131 09:20:35.841275 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:36 crc kubenswrapper[4732]: I0131 09:20:36.045169 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-fcl52"] Jan 31 09:20:36 crc kubenswrapper[4732]: W0131 09:20:36.045713 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf60c0633_b625_41ee_9547_276007d47773.slice/crio-cd90a06411a178d46fc5b2088e3445ee49d4f2f23efb76c48e04889c786b81de WatchSource:0}: Error finding container cd90a06411a178d46fc5b2088e3445ee49d4f2f23efb76c48e04889c786b81de: Status 404 returned error can't find the container with id cd90a06411a178d46fc5b2088e3445ee49d4f2f23efb76c48e04889c786b81de Jan 31 09:20:36 crc kubenswrapper[4732]: I0131 09:20:36.561740 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1add18f8-8147-4b68-ba76-f331c3e04734" path="/var/lib/kubelet/pods/1add18f8-8147-4b68-ba76-f331c3e04734/volumes" Jan 31 09:20:36 crc kubenswrapper[4732]: I0131 09:20:36.835357 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" event={"ID":"f60c0633-b625-41ee-9547-276007d47773","Type":"ContainerStarted","Data":"29c732c7d142dfca9f679c8ac3af3f06cbb08a8c2b215575b2a3b5e1d907c9bb"} Jan 31 09:20:36 crc kubenswrapper[4732]: I0131 09:20:36.835409 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" event={"ID":"f60c0633-b625-41ee-9547-276007d47773","Type":"ContainerStarted","Data":"cd90a06411a178d46fc5b2088e3445ee49d4f2f23efb76c48e04889c786b81de"} Jan 31 09:20:36 crc kubenswrapper[4732]: I0131 09:20:36.865097 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" podStartSLOduration=1.8650802579999999 podStartE2EDuration="1.865080258s" podCreationTimestamp="2026-01-31 09:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:36.861964743 +0000 UTC m=+1175.167840967" watchObservedRunningTime="2026-01-31 09:20:36.865080258 +0000 UTC m=+1175.170956462" Jan 31 09:20:37 crc kubenswrapper[4732]: I0131 09:20:37.844824 4732 generic.go:334] "Generic (PLEG): container finished" podID="f60c0633-b625-41ee-9547-276007d47773" containerID="29c732c7d142dfca9f679c8ac3af3f06cbb08a8c2b215575b2a3b5e1d907c9bb" exitCode=0 Jan 31 09:20:37 crc kubenswrapper[4732]: I0131 09:20:37.844960 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" event={"ID":"f60c0633-b625-41ee-9547-276007d47773","Type":"ContainerDied","Data":"29c732c7d142dfca9f679c8ac3af3f06cbb08a8c2b215575b2a3b5e1d907c9bb"} Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.239980 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.278699 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f60c0633-b625-41ee-9547-276007d47773-etc-swift\") pod \"f60c0633-b625-41ee-9547-276007d47773\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.278822 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-swiftconf\") pod \"f60c0633-b625-41ee-9547-276007d47773\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.278910 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-dispersionconf\") pod \"f60c0633-b625-41ee-9547-276007d47773\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.278947 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-scripts\") pod \"f60c0633-b625-41ee-9547-276007d47773\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.278968 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwjhc\" (UniqueName: \"kubernetes.io/projected/f60c0633-b625-41ee-9547-276007d47773-kube-api-access-pwjhc\") pod \"f60c0633-b625-41ee-9547-276007d47773\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.279002 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-ring-data-devices\") pod \"f60c0633-b625-41ee-9547-276007d47773\" (UID: \"f60c0633-b625-41ee-9547-276007d47773\") " Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.279510 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60c0633-b625-41ee-9547-276007d47773-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f60c0633-b625-41ee-9547-276007d47773" (UID: "f60c0633-b625-41ee-9547-276007d47773"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.279749 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f60c0633-b625-41ee-9547-276007d47773" (UID: "f60c0633-b625-41ee-9547-276007d47773"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.300132 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-fcl52"] Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.303862 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60c0633-b625-41ee-9547-276007d47773-kube-api-access-pwjhc" (OuterVolumeSpecName: "kube-api-access-pwjhc") pod "f60c0633-b625-41ee-9547-276007d47773" (UID: "f60c0633-b625-41ee-9547-276007d47773"). InnerVolumeSpecName "kube-api-access-pwjhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.305631 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f60c0633-b625-41ee-9547-276007d47773" (UID: "f60c0633-b625-41ee-9547-276007d47773"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.307834 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-fcl52"] Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.318306 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-scripts" (OuterVolumeSpecName: "scripts") pod "f60c0633-b625-41ee-9547-276007d47773" (UID: "f60c0633-b625-41ee-9547-276007d47773"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.326501 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f60c0633-b625-41ee-9547-276007d47773" (UID: "f60c0633-b625-41ee-9547-276007d47773"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.380603 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.380635 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwjhc\" (UniqueName: \"kubernetes.io/projected/f60c0633-b625-41ee-9547-276007d47773-kube-api-access-pwjhc\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.380646 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f60c0633-b625-41ee-9547-276007d47773-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.380654 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f60c0633-b625-41ee-9547-276007d47773-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.380680 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.380689 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f60c0633-b625-41ee-9547-276007d47773-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.865004 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd90a06411a178d46fc5b2088e3445ee49d4f2f23efb76c48e04889c786b81de" Jan 31 09:20:39 crc kubenswrapper[4732]: I0131 09:20:39.865094 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-fcl52" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.447931 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6"] Jan 31 09:20:40 crc kubenswrapper[4732]: E0131 09:20:40.448617 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60c0633-b625-41ee-9547-276007d47773" containerName="swift-ring-rebalance" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.448634 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60c0633-b625-41ee-9547-276007d47773" containerName="swift-ring-rebalance" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.448833 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60c0633-b625-41ee-9547-276007d47773" containerName="swift-ring-rebalance" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.449424 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.451753 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.454510 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.470059 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6"] Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.496647 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-etc-swift\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.496730 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-scripts\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.496937 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-ring-data-devices\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.496994 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-swiftconf\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.497034 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gbpc\" (UniqueName: \"kubernetes.io/projected/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-kube-api-access-5gbpc\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.497074 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-dispersionconf\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.554538 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60c0633-b625-41ee-9547-276007d47773" path="/var/lib/kubelet/pods/f60c0633-b625-41ee-9547-276007d47773/volumes" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.598387 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-etc-swift\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.598455 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-scripts\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.598528 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-ring-data-devices\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.598553 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-swiftconf\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.598588 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gbpc\" (UniqueName: \"kubernetes.io/projected/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-kube-api-access-5gbpc\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.598621 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-dispersionconf\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.598936 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-etc-swift\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.599333 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-ring-data-devices\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.599391 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-scripts\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.602236 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-dispersionconf\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.603590 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-swiftconf\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.615991 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gbpc\" (UniqueName: \"kubernetes.io/projected/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-kube-api-access-5gbpc\") pod \"swift-ring-rebalance-debug-6f8g6\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:40 crc kubenswrapper[4732]: I0131 09:20:40.764415 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:41 crc kubenswrapper[4732]: I0131 09:20:41.106070 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6"] Jan 31 09:20:41 crc kubenswrapper[4732]: W0131 09:20:41.109673 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7f454bc_bfe4_4d0f_b300_0a3b2f12f623.slice/crio-2f3db159d081f9cd8fc05d03a5d964ef0ea033b67d2d74b2f94372a6f829841e WatchSource:0}: Error finding container 2f3db159d081f9cd8fc05d03a5d964ef0ea033b67d2d74b2f94372a6f829841e: Status 404 returned error can't find the container with id 2f3db159d081f9cd8fc05d03a5d964ef0ea033b67d2d74b2f94372a6f829841e Jan 31 09:20:41 crc kubenswrapper[4732]: I0131 09:20:41.930392 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" event={"ID":"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623","Type":"ContainerStarted","Data":"b32e75e7ececb21d07a3a37d4719e75555ba099ce802819a06e6d312b37e5f13"} Jan 31 09:20:41 crc kubenswrapper[4732]: I0131 09:20:41.930809 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" event={"ID":"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623","Type":"ContainerStarted","Data":"2f3db159d081f9cd8fc05d03a5d964ef0ea033b67d2d74b2f94372a6f829841e"} Jan 31 09:20:41 crc kubenswrapper[4732]: I0131 09:20:41.950390 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" podStartSLOduration=1.950367516 podStartE2EDuration="1.950367516s" podCreationTimestamp="2026-01-31 09:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:41.947471627 +0000 UTC m=+1180.253347831" watchObservedRunningTime="2026-01-31 09:20:41.950367516 +0000 UTC m=+1180.256243730" Jan 31 09:20:42 crc kubenswrapper[4732]: I0131 09:20:42.943247 4732 generic.go:334] "Generic (PLEG): container finished" podID="c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" containerID="b32e75e7ececb21d07a3a37d4719e75555ba099ce802819a06e6d312b37e5f13" exitCode=0 Jan 31 09:20:42 crc kubenswrapper[4732]: I0131 09:20:42.943357 4732 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" event={"ID":"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623","Type":"ContainerDied","Data":"b32e75e7ececb21d07a3a37d4719e75555ba099ce802819a06e6d312b37e5f13"} Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.226102 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.264397 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-etc-swift\") pod \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.264880 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-swiftconf\") pod \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.264943 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-ring-data-devices\") pod \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.265012 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-scripts\") pod \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.265052 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-dispersionconf\") pod \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.265140 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" (UID: "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.265243 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gbpc\" (UniqueName: \"kubernetes.io/projected/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-kube-api-access-5gbpc\") pod \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\" (UID: \"c7f454bc-bfe4-4d0f-b300-0a3b2f12f623\") " Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.265400 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" (UID: "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.265686 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.265722 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.269718 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6"] Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.274782 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6"] Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.291989 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-kube-api-access-5gbpc" (OuterVolumeSpecName: "kube-api-access-5gbpc") pod "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" (UID: "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623"). InnerVolumeSpecName "kube-api-access-5gbpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.295558 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-scripts" (OuterVolumeSpecName: "scripts") pod "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" (UID: "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.297501 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" (UID: "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.302283 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" (UID: "c7f454bc-bfe4-4d0f-b300-0a3b2f12f623"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.366958 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.366999 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.367014 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gbpc\" (UniqueName: \"kubernetes.io/projected/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-kube-api-access-5gbpc\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.367029 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.558489 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" path="/var/lib/kubelet/pods/c7f454bc-bfe4-4d0f-b300-0a3b2f12f623/volumes" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.961730 4732 scope.go:117] "RemoveContainer" containerID="b32e75e7ececb21d07a3a37d4719e75555ba099ce802819a06e6d312b37e5f13" Jan 31 09:20:44 crc kubenswrapper[4732]: I0131 09:20:44.961750 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-6f8g6" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.397281 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh"] Jan 31 09:20:45 crc kubenswrapper[4732]: E0131 09:20:45.397644 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" containerName="swift-ring-rebalance" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.397673 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" containerName="swift-ring-rebalance" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.397891 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f454bc-bfe4-4d0f-b300-0a3b2f12f623" containerName="swift-ring-rebalance" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.398484 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.404050 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.404066 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.413084 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh"] Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.482110 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-dispersionconf\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.482365 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-scripts\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.482421 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-ring-data-devices\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.482445 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8fb6eab1-4604-4066-ab41-f102cf79889e-etc-swift\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.482460 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mc4q\" (UniqueName: \"kubernetes.io/projected/8fb6eab1-4604-4066-ab41-f102cf79889e-kube-api-access-6mc4q\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.482522 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-swiftconf\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.583978 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8fb6eab1-4604-4066-ab41-f102cf79889e-etc-swift\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc 
kubenswrapper[4732]: I0131 09:20:45.584015 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mc4q\" (UniqueName: \"kubernetes.io/projected/8fb6eab1-4604-4066-ab41-f102cf79889e-kube-api-access-6mc4q\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.584062 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-swiftconf\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.584128 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-dispersionconf\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.584166 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-scripts\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.584217 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-ring-data-devices\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.585099 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-ring-data-devices\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.585117 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-scripts\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.585649 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8fb6eab1-4604-4066-ab41-f102cf79889e-etc-swift\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.588790 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-swiftconf\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" 
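[editor's note] The VerifyControllerAttachedVolume / MountVolume.SetUp entries above map one-to-one onto the pod's declared volume list. Below is a minimal Go sketch of that list, reconstructed only from the volume names and plugin types in these entries — not the operator's actual manifest. The ConfigMap object names come from the "Caches populated for *v1.ConfigMap" entries earlier in this log; the SecretName values are assumptions, since the log only shows the in-pod volume names.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volume set implied by the kubelet entries for
	// swift-ring-rebalance-debug-xdhsh (UID 8fb6eab1-...).
	volumes := []corev1.Volume{
		// kubernetes.io/configmap plugin; object names taken from the
		// reflector "Caches populated" entries above.
		{Name: "scripts", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-scripts"},
			},
		}},
		{Name: "ring-data-devices", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-config-data"},
			},
		}},
		// kubernetes.io/secret plugin; SecretName values are assumed, the
		// log records only the volume names "swiftconf"/"dispersionconf".
		{Name: "swiftconf", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "swift-conf"},
		}},
		{Name: "dispersionconf", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "swift-dispersion-config"},
		}},
		// kubernetes.io/empty-dir plugin: scratch space for the rendered rings.
		{Name: "etc-swift", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		}},
		// kube-api-access-6mc4q (kubernetes.io/projected) is deliberately
		// absent: the service-account admission plugin injects it, which is
		// why its name carries a random suffix in the log.
	}
	for _, v := range volumes {
		fmt.Printf("volume %q -> %s\n", v.Name, pluginFor(v))
	}
}

// pluginFor names the kubelet volume plugin that handles each source,
// matching the PluginName values seen in the TearDown entries.
func pluginFor(v corev1.Volume) string {
	switch {
	case v.ConfigMap != nil:
		return "kubernetes.io/configmap"
	case v.Secret != nil:
		return "kubernetes.io/secret"
	case v.EmptyDir != nil:
		return "kubernetes.io/empty-dir"
	default:
		return "unknown"
	}
}
```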
Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.589325 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-dispersionconf\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.601830 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mc4q\" (UniqueName: \"kubernetes.io/projected/8fb6eab1-4604-4066-ab41-f102cf79889e-kube-api-access-6mc4q\") pod \"swift-ring-rebalance-debug-xdhsh\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:45 crc kubenswrapper[4732]: I0131 09:20:45.719444 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:46 crc kubenswrapper[4732]: I0131 09:20:46.134453 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh"] Jan 31 09:20:46 crc kubenswrapper[4732]: W0131 09:20:46.148369 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fb6eab1_4604_4066_ab41_f102cf79889e.slice/crio-8bbe4227d09313bee386c292dc466bffc18322a4585e9a2d48b5996659f4326c WatchSource:0}: Error finding container 8bbe4227d09313bee386c292dc466bffc18322a4585e9a2d48b5996659f4326c: Status 404 returned error can't find the container with id 8bbe4227d09313bee386c292dc466bffc18322a4585e9a2d48b5996659f4326c Jan 31 09:20:46 crc kubenswrapper[4732]: I0131 09:20:46.986654 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" event={"ID":"8fb6eab1-4604-4066-ab41-f102cf79889e","Type":"ContainerStarted","Data":"5e80e6297ed4cb6407583fc9a3cefb0d406579b8e762ae1e494844cf4e75d919"} Jan 31 09:20:46 crc kubenswrapper[4732]: I0131 09:20:46.987189 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" event={"ID":"8fb6eab1-4604-4066-ab41-f102cf79889e","Type":"ContainerStarted","Data":"8bbe4227d09313bee386c292dc466bffc18322a4585e9a2d48b5996659f4326c"} Jan 31 09:20:47 crc kubenswrapper[4732]: I0131 09:20:47.012699 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" podStartSLOduration=2.012644021 podStartE2EDuration="2.012644021s" podCreationTimestamp="2026-01-31 09:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:47.01034006 +0000 UTC m=+1185.316216264" watchObservedRunningTime="2026-01-31 09:20:47.012644021 +0000 UTC m=+1185.318520225" Jan 31 09:20:48 crc kubenswrapper[4732]: I0131 09:20:47.999174 4732 generic.go:334] "Generic (PLEG): container finished" podID="8fb6eab1-4604-4066-ab41-f102cf79889e" containerID="5e80e6297ed4cb6407583fc9a3cefb0d406579b8e762ae1e494844cf4e75d919" exitCode=0 Jan 31 09:20:48 crc kubenswrapper[4732]: I0131 09:20:47.999426 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" event={"ID":"8fb6eab1-4604-4066-ab41-f102cf79889e","Type":"ContainerDied","Data":"5e80e6297ed4cb6407583fc9a3cefb0d406579b8e762ae1e494844cf4e75d919"} Jan 31 09:20:49 crc 
kubenswrapper[4732]: I0131 09:20:49.424877 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.458510 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh"] Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.482462 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh"] Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.552515 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mc4q\" (UniqueName: \"kubernetes.io/projected/8fb6eab1-4604-4066-ab41-f102cf79889e-kube-api-access-6mc4q\") pod \"8fb6eab1-4604-4066-ab41-f102cf79889e\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.552595 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-swiftconf\") pod \"8fb6eab1-4604-4066-ab41-f102cf79889e\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.552685 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-ring-data-devices\") pod \"8fb6eab1-4604-4066-ab41-f102cf79889e\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.552706 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-dispersionconf\") pod \"8fb6eab1-4604-4066-ab41-f102cf79889e\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.552745 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-scripts\") pod \"8fb6eab1-4604-4066-ab41-f102cf79889e\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.552792 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8fb6eab1-4604-4066-ab41-f102cf79889e-etc-swift\") pod \"8fb6eab1-4604-4066-ab41-f102cf79889e\" (UID: \"8fb6eab1-4604-4066-ab41-f102cf79889e\") " Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.553531 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb6eab1-4604-4066-ab41-f102cf79889e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8fb6eab1-4604-4066-ab41-f102cf79889e" (UID: "8fb6eab1-4604-4066-ab41-f102cf79889e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.553722 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8fb6eab1-4604-4066-ab41-f102cf79889e" (UID: "8fb6eab1-4604-4066-ab41-f102cf79889e"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.574648 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-scripts" (OuterVolumeSpecName: "scripts") pod "8fb6eab1-4604-4066-ab41-f102cf79889e" (UID: "8fb6eab1-4604-4066-ab41-f102cf79889e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.574939 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb6eab1-4604-4066-ab41-f102cf79889e-kube-api-access-6mc4q" (OuterVolumeSpecName: "kube-api-access-6mc4q") pod "8fb6eab1-4604-4066-ab41-f102cf79889e" (UID: "8fb6eab1-4604-4066-ab41-f102cf79889e"). InnerVolumeSpecName "kube-api-access-6mc4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.600972 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8fb6eab1-4604-4066-ab41-f102cf79889e" (UID: "8fb6eab1-4604-4066-ab41-f102cf79889e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.607786 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8fb6eab1-4604-4066-ab41-f102cf79889e" (UID: "8fb6eab1-4604-4066-ab41-f102cf79889e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.654349 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.654398 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8fb6eab1-4604-4066-ab41-f102cf79889e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.654414 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mc4q\" (UniqueName: \"kubernetes.io/projected/8fb6eab1-4604-4066-ab41-f102cf79889e-kube-api-access-6mc4q\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.654429 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.654440 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8fb6eab1-4604-4066-ab41-f102cf79889e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:49 crc kubenswrapper[4732]: I0131 09:20:49.654452 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8fb6eab1-4604-4066-ab41-f102cf79889e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.023815 4732 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8bbe4227d09313bee386c292dc466bffc18322a4585e9a2d48b5996659f4326c" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.024213 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xdhsh" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.558050 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb6eab1-4604-4066-ab41-f102cf79889e" path="/var/lib/kubelet/pods/8fb6eab1-4604-4066-ab41-f102cf79889e/volumes" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.600475 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk"] Jan 31 09:20:50 crc kubenswrapper[4732]: E0131 09:20:50.600854 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb6eab1-4604-4066-ab41-f102cf79889e" containerName="swift-ring-rebalance" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.600878 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb6eab1-4604-4066-ab41-f102cf79889e" containerName="swift-ring-rebalance" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.601046 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb6eab1-4604-4066-ab41-f102cf79889e" containerName="swift-ring-rebalance" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.601606 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.603942 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.604283 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.620511 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk"] Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.669703 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzddp\" (UniqueName: \"kubernetes.io/projected/fa92affd-0106-4b01-b96c-8f2b0459ee3a-kube-api-access-nzddp\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.669831 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa92affd-0106-4b01-b96c-8f2b0459ee3a-etc-swift\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.669914 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-ring-data-devices\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.669943 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-swiftconf\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.669996 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-scripts\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.670028 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-dispersionconf\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.771867 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-swiftconf\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.772187 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-scripts\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.772215 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-dispersionconf\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.772266 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzddp\" (UniqueName: \"kubernetes.io/projected/fa92affd-0106-4b01-b96c-8f2b0459ee3a-kube-api-access-nzddp\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.772316 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa92affd-0106-4b01-b96c-8f2b0459ee3a-etc-swift\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.772345 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-ring-data-devices\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.772853 4732 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa92affd-0106-4b01-b96c-8f2b0459ee3a-etc-swift\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.773192 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-ring-data-devices\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.773438 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-scripts\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.779888 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-dispersionconf\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.784533 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-swiftconf\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.794078 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzddp\" (UniqueName: \"kubernetes.io/projected/fa92affd-0106-4b01-b96c-8f2b0459ee3a-kube-api-access-nzddp\") pod \"swift-ring-rebalance-debug-hwbrk\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:50 crc kubenswrapper[4732]: I0131 09:20:50.931837 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:51 crc kubenswrapper[4732]: I0131 09:20:51.382473 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk"] Jan 31 09:20:51 crc kubenswrapper[4732]: W0131 09:20:51.396195 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa92affd_0106_4b01_b96c_8f2b0459ee3a.slice/crio-bb4c230a2e8424b4f24a0318146a3797047e6098acdddebc6bb41b881f5f6f2c WatchSource:0}: Error finding container bb4c230a2e8424b4f24a0318146a3797047e6098acdddebc6bb41b881f5f6f2c: Status 404 returned error can't find the container with id bb4c230a2e8424b4f24a0318146a3797047e6098acdddebc6bb41b881f5f6f2c Jan 31 09:20:52 crc kubenswrapper[4732]: I0131 09:20:52.040273 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" event={"ID":"fa92affd-0106-4b01-b96c-8f2b0459ee3a","Type":"ContainerStarted","Data":"63aa4ab687f026724734c9f9bad3c34ed0dcd405198734688082bd1c838517d3"} Jan 31 09:20:52 crc kubenswrapper[4732]: I0131 09:20:52.040827 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" event={"ID":"fa92affd-0106-4b01-b96c-8f2b0459ee3a","Type":"ContainerStarted","Data":"bb4c230a2e8424b4f24a0318146a3797047e6098acdddebc6bb41b881f5f6f2c"} Jan 31 09:20:52 crc kubenswrapper[4732]: I0131 09:20:52.058467 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" podStartSLOduration=2.058450885 podStartE2EDuration="2.058450885s" podCreationTimestamp="2026-01-31 09:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:20:52.055956728 +0000 UTC m=+1190.361832942" watchObservedRunningTime="2026-01-31 09:20:52.058450885 +0000 UTC m=+1190.364327089" Jan 31 09:20:53 crc kubenswrapper[4732]: I0131 09:20:53.048344 4732 generic.go:334] "Generic (PLEG): container finished" podID="fa92affd-0106-4b01-b96c-8f2b0459ee3a" containerID="63aa4ab687f026724734c9f9bad3c34ed0dcd405198734688082bd1c838517d3" exitCode=0 Jan 31 09:20:53 crc kubenswrapper[4732]: I0131 09:20:53.048381 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" event={"ID":"fa92affd-0106-4b01-b96c-8f2b0459ee3a","Type":"ContainerDied","Data":"63aa4ab687f026724734c9f9bad3c34ed0dcd405198734688082bd1c838517d3"} Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.423435 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.458834 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk"] Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.469296 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk"] Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.534394 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-swiftconf\") pod \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.534481 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzddp\" (UniqueName: \"kubernetes.io/projected/fa92affd-0106-4b01-b96c-8f2b0459ee3a-kube-api-access-nzddp\") pod \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.534519 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-ring-data-devices\") pod \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.534622 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa92affd-0106-4b01-b96c-8f2b0459ee3a-etc-swift\") pod \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.534753 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-scripts\") pod \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.534876 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-dispersionconf\") pod \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\" (UID: \"fa92affd-0106-4b01-b96c-8f2b0459ee3a\") " Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.537135 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "fa92affd-0106-4b01-b96c-8f2b0459ee3a" (UID: "fa92affd-0106-4b01-b96c-8f2b0459ee3a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.537692 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa92affd-0106-4b01-b96c-8f2b0459ee3a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fa92affd-0106-4b01-b96c-8f2b0459ee3a" (UID: "fa92affd-0106-4b01-b96c-8f2b0459ee3a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.541445 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa92affd-0106-4b01-b96c-8f2b0459ee3a-kube-api-access-nzddp" (OuterVolumeSpecName: "kube-api-access-nzddp") pod "fa92affd-0106-4b01-b96c-8f2b0459ee3a" (UID: "fa92affd-0106-4b01-b96c-8f2b0459ee3a"). InnerVolumeSpecName "kube-api-access-nzddp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.562004 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "fa92affd-0106-4b01-b96c-8f2b0459ee3a" (UID: "fa92affd-0106-4b01-b96c-8f2b0459ee3a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.576391 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-scripts" (OuterVolumeSpecName: "scripts") pod "fa92affd-0106-4b01-b96c-8f2b0459ee3a" (UID: "fa92affd-0106-4b01-b96c-8f2b0459ee3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.603850 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "fa92affd-0106-4b01-b96c-8f2b0459ee3a" (UID: "fa92affd-0106-4b01-b96c-8f2b0459ee3a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.639018 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fa92affd-0106-4b01-b96c-8f2b0459ee3a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.639057 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.639072 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.639085 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fa92affd-0106-4b01-b96c-8f2b0459ee3a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.639099 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzddp\" (UniqueName: \"kubernetes.io/projected/fa92affd-0106-4b01-b96c-8f2b0459ee3a-kube-api-access-nzddp\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.639131 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fa92affd-0106-4b01-b96c-8f2b0459ee3a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.642205 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 
09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.642250 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.642263 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-7q9kp"] Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.642634 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-server" containerID="cri-o://a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643053 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-server" containerID="cri-o://d31d074c83b389c8db51d7fd48db465e4e9e513ed6084f0c730eaa69444cb6c5" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643336 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="swift-recon-cron" containerID="cri-o://4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643382 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="rsync" containerID="cri-o://9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643420 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-expirer" containerID="cri-o://f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643450 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-updater" containerID="cri-o://22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643482 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-auditor" containerID="cri-o://36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643510 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-replicator" containerID="cri-o://3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643557 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-server" containerID="cri-o://61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521" gracePeriod=30 Jan 31 09:20:54 crc 
kubenswrapper[4732]: I0131 09:20:54.643590 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-updater" containerID="cri-o://c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643622 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-auditor" containerID="cri-o://bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643654 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-replicator" containerID="cri-o://c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643704 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-server" containerID="cri-o://498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643734 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-reaper" containerID="cri-o://caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643762 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-auditor" containerID="cri-o://647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.643796 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-replicator" containerID="cri-o://1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644287 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="swift-recon-cron" containerID="cri-o://186ff48385c2286a698108d59672234436dfc7dbb5bac5e777070affa544b217" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644343 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="rsync" containerID="cri-o://f7bb7853880d63c47b634d75defb148b4ed41cf77de6d39f02020df2ff5b03d3" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644382 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-expirer" containerID="cri-o://b50b39b84c6087e4699184efc06a407c325204a64a826bce844fec4c5164858e" 
gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644427 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-updater" containerID="cri-o://3a8ebcfcb039bb39617fcee0e053ba202cf3bccf0891372d90f2647abbe5c1ea" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644462 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-auditor" containerID="cri-o://420022f2e68e290b989b3165de84484bcc24a80cd43dfb94fa6e26433aff9a55" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644495 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-replicator" containerID="cri-o://fd00da0aa47cdd0410c40f7add08c30b0950cfafc21a202b2619d006f368871b" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644527 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-server" containerID="cri-o://78f74f08eb824105edcd5f1f9501b3fa915219d39b763fb391f67d7c1c054292" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644561 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-updater" containerID="cri-o://655a856811f2b949bc2c94495ba00a853818330f4ec09b9aeffd1751288aff79" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644598 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-auditor" containerID="cri-o://61d667d399369be98c22e8ccc9f3b4cb18a2a2b489b34f036b2fc88c0e71e429" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644632 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-replicator" containerID="cri-o://039961fef86b21e927d5ebdf1e2eb67c58775d812f1269411f3f0411c895a43f" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644741 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-server" containerID="cri-o://ada6f8ad69317a784118495c96beb36d701ce7b36209a86b9a936d7cc110e3c2" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.644993 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-reaper" containerID="cri-o://8d252efa39cfce425e89f4670019c9177c3b0c93daee21298b31748b98d341f0" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.645071 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-auditor" 
containerID="cri-o://6e7852a660c0fcf3225ca272b8e2af7fc11735fb24d2d14460cde6438a7beb2d" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.645109 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-replicator" containerID="cri-o://8f37816afb654d1e776d0d4eec1d440d5af5c0af6bb7163bf1822d00c7129008" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.646875 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647241 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-server" containerID="cri-o://caac1f6da96d02d4cab51830f080cd652323af90c1836570aef955dde0207cff" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647302 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="swift-recon-cron" containerID="cri-o://261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647333 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="rsync" containerID="cri-o://01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647362 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-expirer" containerID="cri-o://caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647392 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-updater" containerID="cri-o://3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647423 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-auditor" containerID="cri-o://a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647453 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-replicator" containerID="cri-o://3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647482 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-server" containerID="cri-o://463912ed98b787656c483ddcf4b73e963647c6d5df261cfe3fad78758d6d1b9d" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647510 4732 
kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-updater" containerID="cri-o://7ce880a4ac662620fb475dd18995992d32b8be418497d54beba8a09c1335c1aa" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647539 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-auditor" containerID="cri-o://54fada92647ba12febbb920093f9bc8e7464da78204240a5a00d06baf380ab69" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647568 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-replicator" containerID="cri-o://fd1347f500390657be8ce2ee2c537ab8800d5072a87c360ae87e57f1fb6a1882" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647598 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-server" containerID="cri-o://672f5feadfa214c6a0a943e614b3f6be2205caf0cfad789bc48eecd5126064c9" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647627 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-reaper" containerID="cri-o://8fa21222e4cd9c26bf7a8699721788dc5f3bcbad179ebfca9e1ff5dd4f7eca9f" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647673 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-auditor" containerID="cri-o://72926ae802ba11d13ab77c79539bfcc94151936d207f632232dd7d1f41349d6d" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.647705 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-replicator" containerID="cri-o://fa17ab717c5754facbf3da31cb4afd3477236bd426c65c9122642f27b5886fb9" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.651985 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-7q9kp"] Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.671946 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"] Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.672165 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" podUID="8cb5e63b-882d-4388-abb1-130923832c9f" containerName="proxy-httpd" containerID="cri-o://783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5" gracePeriod=30 Jan 31 09:20:54 crc kubenswrapper[4732]: I0131 09:20:54.672306 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" podUID="8cb5e63b-882d-4388-abb1-130923832c9f" containerName="proxy-server" containerID="cri-o://4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6" gracePeriod=30 Jan 31 09:20:55 crc 
kubenswrapper[4732]: I0131 09:20:55.070321 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070356 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070366 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070374 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070380 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070387 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070393 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070400 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070407 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070413 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070421 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070395 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070480 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070494 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070505 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070514 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070524 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070533 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070541 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070550 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070559 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.070567 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.075868 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.075909 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.075921 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.075932 4732 generic.go:334] "Generic (PLEG): container finished" 
podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.075941 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="7ce880a4ac662620fb475dd18995992d32b8be418497d54beba8a09c1335c1aa" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.075930 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.075986 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076000 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076012 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076026 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"7ce880a4ac662620fb475dd18995992d32b8be418497d54beba8a09c1335c1aa"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.075950 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="54fada92647ba12febbb920093f9bc8e7464da78204240a5a00d06baf380ab69" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076048 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="fd1347f500390657be8ce2ee2c537ab8800d5072a87c360ae87e57f1fb6a1882" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076061 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="8fa21222e4cd9c26bf7a8699721788dc5f3bcbad179ebfca9e1ff5dd4f7eca9f" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076069 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="72926ae802ba11d13ab77c79539bfcc94151936d207f632232dd7d1f41349d6d" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076077 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="fa17ab717c5754facbf3da31cb4afd3477236bd426c65c9122642f27b5886fb9" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076037 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"54fada92647ba12febbb920093f9bc8e7464da78204240a5a00d06baf380ab69"} Jan 31 09:20:55 crc kubenswrapper[4732]: 
I0131 09:20:55.076126 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"fd1347f500390657be8ce2ee2c537ab8800d5072a87c360ae87e57f1fb6a1882"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076138 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"8fa21222e4cd9c26bf7a8699721788dc5f3bcbad179ebfca9e1ff5dd4f7eca9f"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076148 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"72926ae802ba11d13ab77c79539bfcc94151936d207f632232dd7d1f41349d6d"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.076159 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"fa17ab717c5754facbf3da31cb4afd3477236bd426c65c9122642f27b5886fb9"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084260 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="b50b39b84c6087e4699184efc06a407c325204a64a826bce844fec4c5164858e" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084287 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="3a8ebcfcb039bb39617fcee0e053ba202cf3bccf0891372d90f2647abbe5c1ea" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084294 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="420022f2e68e290b989b3165de84484bcc24a80cd43dfb94fa6e26433aff9a55" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084302 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="fd00da0aa47cdd0410c40f7add08c30b0950cfafc21a202b2619d006f368871b" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084313 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="78f74f08eb824105edcd5f1f9501b3fa915219d39b763fb391f67d7c1c054292" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084319 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="655a856811f2b949bc2c94495ba00a853818330f4ec09b9aeffd1751288aff79" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084326 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="61d667d399369be98c22e8ccc9f3b4cb18a2a2b489b34f036b2fc88c0e71e429" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084333 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="039961fef86b21e927d5ebdf1e2eb67c58775d812f1269411f3f0411c895a43f" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084339 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="ada6f8ad69317a784118495c96beb36d701ce7b36209a86b9a936d7cc110e3c2" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084345 
4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="8d252efa39cfce425e89f4670019c9177c3b0c93daee21298b31748b98d341f0" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084351 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="6e7852a660c0fcf3225ca272b8e2af7fc11735fb24d2d14460cde6438a7beb2d" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084357 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="8f37816afb654d1e776d0d4eec1d440d5af5c0af6bb7163bf1822d00c7129008" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084399 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"b50b39b84c6087e4699184efc06a407c325204a64a826bce844fec4c5164858e"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084426 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"3a8ebcfcb039bb39617fcee0e053ba202cf3bccf0891372d90f2647abbe5c1ea"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084441 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"420022f2e68e290b989b3165de84484bcc24a80cd43dfb94fa6e26433aff9a55"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084453 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"fd00da0aa47cdd0410c40f7add08c30b0950cfafc21a202b2619d006f368871b"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084462 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"78f74f08eb824105edcd5f1f9501b3fa915219d39b763fb391f67d7c1c054292"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084470 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"655a856811f2b949bc2c94495ba00a853818330f4ec09b9aeffd1751288aff79"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084478 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"61d667d399369be98c22e8ccc9f3b4cb18a2a2b489b34f036b2fc88c0e71e429"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084486 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"039961fef86b21e927d5ebdf1e2eb67c58775d812f1269411f3f0411c895a43f"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084494 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"ada6f8ad69317a784118495c96beb36d701ce7b36209a86b9a936d7cc110e3c2"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084502 4732 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"8d252efa39cfce425e89f4670019c9177c3b0c93daee21298b31748b98d341f0"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084512 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"6e7852a660c0fcf3225ca272b8e2af7fc11735fb24d2d14460cde6438a7beb2d"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.084520 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"8f37816afb654d1e776d0d4eec1d440d5af5c0af6bb7163bf1822d00c7129008"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.091553 4732 generic.go:334] "Generic (PLEG): container finished" podID="8cb5e63b-882d-4388-abb1-130923832c9f" containerID="783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5" exitCode=0 Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.091623 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" event={"ID":"8cb5e63b-882d-4388-abb1-130923832c9f","Type":"ContainerDied","Data":"783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5"} Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.093536 4732 scope.go:117] "RemoveContainer" containerID="63aa4ab687f026724734c9f9bad3c34ed0dcd405198734688082bd1c838517d3" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.093582 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hwbrk" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.461030 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.556055 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") pod \"8cb5e63b-882d-4388-abb1-130923832c9f\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.556113 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-run-httpd\") pod \"8cb5e63b-882d-4388-abb1-130923832c9f\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.556140 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-log-httpd\") pod \"8cb5e63b-882d-4388-abb1-130923832c9f\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.556169 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb5e63b-882d-4388-abb1-130923832c9f-config-data\") pod \"8cb5e63b-882d-4388-abb1-130923832c9f\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.556207 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n98z\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-kube-api-access-5n98z\") pod \"8cb5e63b-882d-4388-abb1-130923832c9f\" (UID: \"8cb5e63b-882d-4388-abb1-130923832c9f\") " Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.557521 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8cb5e63b-882d-4388-abb1-130923832c9f" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.557764 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8cb5e63b-882d-4388-abb1-130923832c9f" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.560897 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-kube-api-access-5n98z" (OuterVolumeSpecName: "kube-api-access-5n98z") pod "8cb5e63b-882d-4388-abb1-130923832c9f" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f"). InnerVolumeSpecName "kube-api-access-5n98z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.562276 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8cb5e63b-882d-4388-abb1-130923832c9f" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.605058 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb5e63b-882d-4388-abb1-130923832c9f-config-data" (OuterVolumeSpecName: "config-data") pod "8cb5e63b-882d-4388-abb1-130923832c9f" (UID: "8cb5e63b-882d-4388-abb1-130923832c9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.658262 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.658300 4732 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.658312 4732 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cb5e63b-882d-4388-abb1-130923832c9f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.658325 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb5e63b-882d-4388-abb1-130923832c9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:55 crc kubenswrapper[4732]: I0131 09:20:55.658336 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n98z\" (UniqueName: \"kubernetes.io/projected/8cb5e63b-882d-4388-abb1-130923832c9f-kube-api-access-5n98z\") on node \"crc\" DevicePath \"\"" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.107476 4732 generic.go:334] "Generic (PLEG): container finished" podID="8cb5e63b-882d-4388-abb1-130923832c9f" containerID="4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.107540 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.107575 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" event={"ID":"8cb5e63b-882d-4388-abb1-130923832c9f","Type":"ContainerDied","Data":"4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.108226 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq" event={"ID":"8cb5e63b-882d-4388-abb1-130923832c9f","Type":"ContainerDied","Data":"32df1a9318cd9e4682742a9570f27db68b7c7b206dbaec4cd560f06f827fb57e"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.108261 4732 scope.go:117] "RemoveContainer" containerID="4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.118312 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.118343 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="463912ed98b787656c483ddcf4b73e963647c6d5df261cfe3fad78758d6d1b9d" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.118352 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="672f5feadfa214c6a0a943e614b3f6be2205caf0cfad789bc48eecd5126064c9" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.118360 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="caac1f6da96d02d4cab51830f080cd652323af90c1836570aef955dde0207cff" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.118357 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.118423 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"463912ed98b787656c483ddcf4b73e963647c6d5df261cfe3fad78758d6d1b9d"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.118460 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"672f5feadfa214c6a0a943e614b3f6be2205caf0cfad789bc48eecd5126064c9"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.118481 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"caac1f6da96d02d4cab51830f080cd652323af90c1836570aef955dde0207cff"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.130336 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.130366 4732 generic.go:334] "Generic (PLEG): container finished" 
podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.130373 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.130425 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.130565 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.130972 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.135212 4732 scope.go:117] "RemoveContainer" containerID="783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.147153 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="f7bb7853880d63c47b634d75defb148b4ed41cf77de6d39f02020df2ff5b03d3" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.147183 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="d31d074c83b389c8db51d7fd48db465e4e9e513ed6084f0c730eaa69444cb6c5" exitCode=0 Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.147211 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"f7bb7853880d63c47b634d75defb148b4ed41cf77de6d39f02020df2ff5b03d3"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.147241 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"d31d074c83b389c8db51d7fd48db465e4e9e513ed6084f0c730eaa69444cb6c5"} Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.150545 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"] Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.159111 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-lvwdq"] Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.163813 4732 scope.go:117] "RemoveContainer" containerID="4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6" Jan 31 09:20:56 crc kubenswrapper[4732]: E0131 09:20:56.164379 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6\": container with ID starting with 4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6 not found: ID does not exist" 
containerID="4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.164437 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6"} err="failed to get container status \"4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6\": rpc error: code = NotFound desc = could not find container \"4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6\": container with ID starting with 4835492a2361cd167a0d1bc3cdf3c2b8f82043e40e4867b9666ee8ab6a6362e6 not found: ID does not exist" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.164470 4732 scope.go:117] "RemoveContainer" containerID="783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5" Jan 31 09:20:56 crc kubenswrapper[4732]: E0131 09:20:56.165123 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5\": container with ID starting with 783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5 not found: ID does not exist" containerID="783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.165151 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5"} err="failed to get container status \"783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5\": rpc error: code = NotFound desc = could not find container \"783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5\": container with ID starting with 783febaf4bf86ebe7b225017d95af46ce165aac17683180fbdfbb6b8367cdec5 not found: ID does not exist" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.555748 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb5e63b-882d-4388-abb1-130923832c9f" path="/var/lib/kubelet/pods/8cb5e63b-882d-4388-abb1-130923832c9f/volumes" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.557993 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25bc49e-1bbe-4103-b751-fee5d86e7a92" path="/var/lib/kubelet/pods/e25bc49e-1bbe-4103-b751-fee5d86e7a92/volumes" Jan 31 09:20:56 crc kubenswrapper[4732]: I0131 09:20:56.558870 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa92affd-0106-4b01-b96c-8f2b0459ee3a" path="/var/lib/kubelet/pods/fa92affd-0106-4b01-b96c-8f2b0459ee3a/volumes" Jan 31 09:21:24 crc kubenswrapper[4732]: E0131 09:21:24.906988 4732 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea3117d7_0038_4ca5_bee5_ae76db9a12eb.slice/crio-conmon-186ff48385c2286a698108d59672234436dfc7dbb5bac5e777070affa544b217.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.128949 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.143212 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.145832 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249201 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249273 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") pod \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249332 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-cache\") pod \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249364 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txhcm\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-kube-api-access-txhcm\") pod \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249388 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") pod \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249414 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zdfg\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-kube-api-access-9zdfg\") pod \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249446 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249483 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-lock\") pod \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249510 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-lock\") pod \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249609 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqbp6\" (UniqueName: 
\"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-kube-api-access-pqbp6\") pod \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249637 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249681 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-lock\") pod \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249716 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-cache\") pod \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\" (UID: \"18b68f5e-a1b4-4f52-9a4e-5967735ec105\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249736 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") pod \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\" (UID: \"eb04e24b-fc92-4f2e-abcb-fa46706f699a\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249777 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-cache\") pod \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\" (UID: \"ea3117d7-0038-4ca5-bee5-ae76db9a12eb\") " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.249951 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-cache" (OuterVolumeSpecName: "cache") pod "eb04e24b-fc92-4f2e-abcb-fa46706f699a" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.250124 4732 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-cache\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.250441 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-lock" (OuterVolumeSpecName: "lock") pod "eb04e24b-fc92-4f2e-abcb-fa46706f699a" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.250906 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-cache" (OuterVolumeSpecName: "cache") pod "18b68f5e-a1b4-4f52-9a4e-5967735ec105" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.250933 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-lock" (OuterVolumeSpecName: "lock") pod "ea3117d7-0038-4ca5-bee5-ae76db9a12eb" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.262937 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-cache" (OuterVolumeSpecName: "cache") pod "ea3117d7-0038-4ca5-bee5-ae76db9a12eb" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.263806 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-lock" (OuterVolumeSpecName: "lock") pod "18b68f5e-a1b4-4f52-9a4e-5967735ec105" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.271339 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "eb04e24b-fc92-4f2e-abcb-fa46706f699a" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.271364 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "18b68f5e-a1b4-4f52-9a4e-5967735ec105" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.271509 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-kube-api-access-pqbp6" (OuterVolumeSpecName: "kube-api-access-pqbp6") pod "eb04e24b-fc92-4f2e-abcb-fa46706f699a" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a"). InnerVolumeSpecName "kube-api-access-pqbp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.271946 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "18b68f5e-a1b4-4f52-9a4e-5967735ec105" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.271568 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "ea3117d7-0038-4ca5-bee5-ae76db9a12eb" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.271800 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-kube-api-access-txhcm" (OuterVolumeSpecName: "kube-api-access-txhcm") pod "18b68f5e-a1b4-4f52-9a4e-5967735ec105" (UID: "18b68f5e-a1b4-4f52-9a4e-5967735ec105"). InnerVolumeSpecName "kube-api-access-txhcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.272806 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-kube-api-access-9zdfg" (OuterVolumeSpecName: "kube-api-access-9zdfg") pod "ea3117d7-0038-4ca5-bee5-ae76db9a12eb" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb"). InnerVolumeSpecName "kube-api-access-9zdfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.276987 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ea3117d7-0038-4ca5-bee5-ae76db9a12eb" (UID: "ea3117d7-0038-4ca5-bee5-ae76db9a12eb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.279307 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "eb04e24b-fc92-4f2e-abcb-fa46706f699a" (UID: "eb04e24b-fc92-4f2e-abcb-fa46706f699a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350725 4732 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350758 4732 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eb04e24b-fc92-4f2e-abcb-fa46706f699a-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350772 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqbp6\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-kube-api-access-pqbp6\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350810 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350823 4732 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350833 4732 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/18b68f5e-a1b4-4f52-9a4e-5967735ec105-cache\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350844 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb04e24b-fc92-4f2e-abcb-fa46706f699a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350854 4732 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-cache\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350875 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350885 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350893 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txhcm\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-kube-api-access-txhcm\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350902 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/18b68f5e-a1b4-4f52-9a4e-5967735ec105-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350910 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zdfg\" (UniqueName: \"kubernetes.io/projected/ea3117d7-0038-4ca5-bee5-ae76db9a12eb-kube-api-access-9zdfg\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.350927 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice 
started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.362825 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.366769 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.375231 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.418643 4732 generic.go:334] "Generic (PLEG): container finished" podID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerID="4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e" exitCode=137 Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.418819 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.418873 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"eb04e24b-fc92-4f2e-abcb-fa46706f699a","Type":"ContainerDied","Data":"484b2ea8fc5f3e2a8486b0977aa570fbe958192eb24867f074d00e930ab4b1a3"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.418868 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.418910 4732 scope.go:117] "RemoveContainer" containerID="4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427151 4732 generic.go:334] "Generic (PLEG): container finished" podID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerID="261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828" exitCode=137 Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427209 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427260 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"18b68f5e-a1b4-4f52-9a4e-5967735ec105","Type":"ContainerDied","Data":"9213fe58caadec775e92ed28d6bba2a8ef3c6d5b541e8e1e305d2da7b48f0916"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427275 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"463912ed98b787656c483ddcf4b73e963647c6d5df261cfe3fad78758d6d1b9d"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427287 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ce880a4ac662620fb475dd18995992d32b8be418497d54beba8a09c1335c1aa"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427294 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54fada92647ba12febbb920093f9bc8e7464da78204240a5a00d06baf380ab69"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427302 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd1347f500390657be8ce2ee2c537ab8800d5072a87c360ae87e57f1fb6a1882"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427308 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"672f5feadfa214c6a0a943e614b3f6be2205caf0cfad789bc48eecd5126064c9"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427315 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fa21222e4cd9c26bf7a8699721788dc5f3bcbad179ebfca9e1ff5dd4f7eca9f"} Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427317 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427321 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72926ae802ba11d13ab77c79539bfcc94151936d207f632232dd7d1f41349d6d"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427795 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa17ab717c5754facbf3da31cb4afd3477236bd426c65c9122642f27b5886fb9"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.427811 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"caac1f6da96d02d4cab51830f080cd652323af90c1836570aef955dde0207cff"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439647 4732 generic.go:334] "Generic (PLEG): container finished" podID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerID="186ff48385c2286a698108d59672234436dfc7dbb5bac5e777070affa544b217" exitCode=137
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439700 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"186ff48385c2286a698108d59672234436dfc7dbb5bac5e777070affa544b217"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439725 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"186ff48385c2286a698108d59672234436dfc7dbb5bac5e777070affa544b217"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439738 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7bb7853880d63c47b634d75defb148b4ed41cf77de6d39f02020df2ff5b03d3"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439745 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b50b39b84c6087e4699184efc06a407c325204a64a826bce844fec4c5164858e"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439752 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a8ebcfcb039bb39617fcee0e053ba202cf3bccf0891372d90f2647abbe5c1ea"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439758 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"420022f2e68e290b989b3165de84484bcc24a80cd43dfb94fa6e26433aff9a55"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439765 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd00da0aa47cdd0410c40f7add08c30b0950cfafc21a202b2619d006f368871b"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439772 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78f74f08eb824105edcd5f1f9501b3fa915219d39b763fb391f67d7c1c054292"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439779 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"655a856811f2b949bc2c94495ba00a853818330f4ec09b9aeffd1751288aff79"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439786 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61d667d399369be98c22e8ccc9f3b4cb18a2a2b489b34f036b2fc88c0e71e429"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439794 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"039961fef86b21e927d5ebdf1e2eb67c58775d812f1269411f3f0411c895a43f"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439800 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ada6f8ad69317a784118495c96beb36d701ce7b36209a86b9a936d7cc110e3c2"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439806 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d252efa39cfce425e89f4670019c9177c3b0c93daee21298b31748b98d341f0"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439877 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e7852a660c0fcf3225ca272b8e2af7fc11735fb24d2d14460cde6438a7beb2d"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439832 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439898 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f37816afb654d1e776d0d4eec1d440d5af5c0af6bb7163bf1822d00c7129008"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439952 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d31d074c83b389c8db51d7fd48db465e4e9e513ed6084f0c730eaa69444cb6c5"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439969 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ea3117d7-0038-4ca5-bee5-ae76db9a12eb","Type":"ContainerDied","Data":"6568c920e3e41f0ca77451bc255f08043b793e9b23cf5a92a349e3e4e75234e1"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.439996 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"186ff48385c2286a698108d59672234436dfc7dbb5bac5e777070affa544b217"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440005 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7bb7853880d63c47b634d75defb148b4ed41cf77de6d39f02020df2ff5b03d3"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440012 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b50b39b84c6087e4699184efc06a407c325204a64a826bce844fec4c5164858e"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440019 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a8ebcfcb039bb39617fcee0e053ba202cf3bccf0891372d90f2647abbe5c1ea"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440025 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"420022f2e68e290b989b3165de84484bcc24a80cd43dfb94fa6e26433aff9a55"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440032 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd00da0aa47cdd0410c40f7add08c30b0950cfafc21a202b2619d006f368871b"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440037 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78f74f08eb824105edcd5f1f9501b3fa915219d39b763fb391f67d7c1c054292"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440044 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"655a856811f2b949bc2c94495ba00a853818330f4ec09b9aeffd1751288aff79"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440050 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61d667d399369be98c22e8ccc9f3b4cb18a2a2b489b34f036b2fc88c0e71e429"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440057 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"039961fef86b21e927d5ebdf1e2eb67c58775d812f1269411f3f0411c895a43f"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440063 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ada6f8ad69317a784118495c96beb36d701ce7b36209a86b9a936d7cc110e3c2"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440070 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d252efa39cfce425e89f4670019c9177c3b0c93daee21298b31748b98d341f0"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440077 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e7852a660c0fcf3225ca272b8e2af7fc11735fb24d2d14460cde6438a7beb2d"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440082 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f37816afb654d1e776d0d4eec1d440d5af5c0af6bb7163bf1822d00c7129008"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.440087 4732 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d31d074c83b389c8db51d7fd48db465e4e9e513ed6084f0c730eaa69444cb6c5"}
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.443762 4732 scope.go:117] "RemoveContainer" containerID="9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.453417 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.453448 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.453459 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.459365 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.466083 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.467840 4732 scope.go:117] "RemoveContainer" containerID="f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.489282 4732 scope.go:117] "RemoveContainer" containerID="22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.494717 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.505221 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.510768 4732 scope.go:117] "RemoveContainer" containerID="36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.511554 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.517254 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.528949 4732 scope.go:117] "RemoveContainer" containerID="3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.547256 4732 scope.go:117] "RemoveContainer" containerID="61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.560868 4732 scope.go:117] "RemoveContainer" containerID="c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.577784 4732 scope.go:117] "RemoveContainer" containerID="bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.592312 4732 scope.go:117] "RemoveContainer" containerID="c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.607705 4732 scope.go:117] "RemoveContainer" containerID="498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.621350 4732 scope.go:117] "RemoveContainer" containerID="caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.639400 4732 scope.go:117] "RemoveContainer" containerID="647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.654007 4732 scope.go:117] "RemoveContainer" containerID="1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.666373 4732 scope.go:117] "RemoveContainer" containerID="a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c"
Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.680152 4732 scope.go:117] "RemoveContainer" containerID="4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e"
Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.680468 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e\": container with ID starting with 4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e not found: ID does not exist" containerID="4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e"
does not exist" containerID="4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.680502 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e"} err="failed to get container status \"4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e\": rpc error: code = NotFound desc = could not find container \"4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e\": container with ID starting with 4591c4a657bf0f19e528acc7cb5586ad0a5dffe6072302f778c6b3daec4b890e not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.680526 4732 scope.go:117] "RemoveContainer" containerID="9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.680916 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9\": container with ID starting with 9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9 not found: ID does not exist" containerID="9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.680946 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9"} err="failed to get container status \"9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9\": rpc error: code = NotFound desc = could not find container \"9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9\": container with ID starting with 9bd8439c812d4db4031eaf201411d9f03f68a7e4fb620cd284c1316ed57ea8e9 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.680965 4732 scope.go:117] "RemoveContainer" containerID="f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.681153 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f\": container with ID starting with f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f not found: ID does not exist" containerID="f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.681187 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f"} err="failed to get container status \"f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f\": rpc error: code = NotFound desc = could not find container \"f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f\": container with ID starting with f7740f3734b2a6d8e15826537feeb63febadea03ccc4bc0d6f70700f6a75626f not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.681240 4732 scope.go:117] "RemoveContainer" containerID="22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.681554 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d\": container with ID starting with 22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d not found: ID does not exist" containerID="22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.681641 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d"} err="failed to get container status \"22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d\": rpc error: code = NotFound desc = could not find container \"22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d\": container with ID starting with 22026d373e04df27564ec7eb67c62bdb36d5f926aefc6470f05958ca3bfafb6d not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.681728 4732 scope.go:117] "RemoveContainer" containerID="36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.681993 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb\": container with ID starting with 36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb not found: ID does not exist" containerID="36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.682020 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb"} err="failed to get container status \"36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb\": rpc error: code = NotFound desc = could not find container \"36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb\": container with ID starting with 36c3a43f98d1be3fa4067ddd849e1f8adb09f6830fb90e1e103b4fee901b5ddb not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.682037 4732 scope.go:117] "RemoveContainer" containerID="3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.682230 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f\": container with ID starting with 3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f not found: ID does not exist" containerID="3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.682258 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f"} err="failed to get container status \"3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f\": rpc error: code = NotFound desc = could not find container \"3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f\": container with ID starting with 3bedc7b7578ebe9948019c2fe597f44f31339a72681e217b76908ee8888b902f not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.682274 4732 scope.go:117] "RemoveContainer" containerID="61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521" Jan 31 09:21:25 crc 
kubenswrapper[4732]: E0131 09:21:25.682480 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521\": container with ID starting with 61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521 not found: ID does not exist" containerID="61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.682507 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521"} err="failed to get container status \"61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521\": rpc error: code = NotFound desc = could not find container \"61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521\": container with ID starting with 61eb8b513a113fbabe0d9a8fc9c197f8afac97222ddfaaa712ed87f070443521 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.682525 4732 scope.go:117] "RemoveContainer" containerID="c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.682855 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7\": container with ID starting with c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7 not found: ID does not exist" containerID="c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.682895 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7"} err="failed to get container status \"c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7\": rpc error: code = NotFound desc = could not find container \"c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7\": container with ID starting with c9a37bdf8b63464e6bd6756605e3b1a2408e5416d00cda661d4d274b38bd7be7 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.682952 4732 scope.go:117] "RemoveContainer" containerID="bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.683234 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8\": container with ID starting with bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8 not found: ID does not exist" containerID="bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.683262 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8"} err="failed to get container status \"bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8\": rpc error: code = NotFound desc = could not find container \"bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8\": container with ID starting with bde9eaf63bb5ab51e11bacddaba56989bae7d6031fa19e2c1b77f92866dadea8 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: 
I0131 09:21:25.683281 4732 scope.go:117] "RemoveContainer" containerID="c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.683474 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd\": container with ID starting with c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd not found: ID does not exist" containerID="c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.683504 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd"} err="failed to get container status \"c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd\": rpc error: code = NotFound desc = could not find container \"c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd\": container with ID starting with c9736add9e20e2e6d821daa6edc69baaef51fa7d36762e678ee4128cf4df9dbd not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.683523 4732 scope.go:117] "RemoveContainer" containerID="498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.683772 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab\": container with ID starting with 498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab not found: ID does not exist" containerID="498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.683847 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab"} err="failed to get container status \"498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab\": rpc error: code = NotFound desc = could not find container \"498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab\": container with ID starting with 498871caf245c1aa003224c568a9512c564fb67d15571cad0c2fa636e92058ab not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.683904 4732 scope.go:117] "RemoveContainer" containerID="caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.684155 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6\": container with ID starting with caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6 not found: ID does not exist" containerID="caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.684183 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6"} err="failed to get container status \"caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6\": rpc error: code = NotFound desc = could not find container \"caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6\": container 
with ID starting with caafb470bfca795c3c7f1107e8a9e97700d33c459a35140d551bfc62e596d5a6 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.684199 4732 scope.go:117] "RemoveContainer" containerID="647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.684474 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c\": container with ID starting with 647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c not found: ID does not exist" containerID="647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.684512 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c"} err="failed to get container status \"647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c\": rpc error: code = NotFound desc = could not find container \"647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c\": container with ID starting with 647e5cb59a1fa496ac972ece6ee175c218e028f20f65b8bd02f6b5c3d914e53c not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.684586 4732 scope.go:117] "RemoveContainer" containerID="1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.685922 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178\": container with ID starting with 1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178 not found: ID does not exist" containerID="1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.685950 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178"} err="failed to get container status \"1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178\": rpc error: code = NotFound desc = could not find container \"1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178\": container with ID starting with 1a43f64c141cb300a8ec9eb5a477bb5c1933446a32abc193cb5173f92ba21178 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.685971 4732 scope.go:117] "RemoveContainer" containerID="a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.686161 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c\": container with ID starting with a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c not found: ID does not exist" containerID="a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.686185 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c"} err="failed to get container status 
\"a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c\": rpc error: code = NotFound desc = could not find container \"a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c\": container with ID starting with a00d79844fd0811e549149a48676d54976a695ed7a1497b1125e383f358fdc3c not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.686200 4732 scope.go:117] "RemoveContainer" containerID="261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.704490 4732 scope.go:117] "RemoveContainer" containerID="01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.720123 4732 scope.go:117] "RemoveContainer" containerID="caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.737404 4732 scope.go:117] "RemoveContainer" containerID="3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.772353 4732 scope.go:117] "RemoveContainer" containerID="a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.785539 4732 scope.go:117] "RemoveContainer" containerID="3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.802030 4732 scope.go:117] "RemoveContainer" containerID="463912ed98b787656c483ddcf4b73e963647c6d5df261cfe3fad78758d6d1b9d" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.820316 4732 scope.go:117] "RemoveContainer" containerID="7ce880a4ac662620fb475dd18995992d32b8be418497d54beba8a09c1335c1aa" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.833523 4732 scope.go:117] "RemoveContainer" containerID="54fada92647ba12febbb920093f9bc8e7464da78204240a5a00d06baf380ab69" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.847543 4732 scope.go:117] "RemoveContainer" containerID="fd1347f500390657be8ce2ee2c537ab8800d5072a87c360ae87e57f1fb6a1882" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.862161 4732 scope.go:117] "RemoveContainer" containerID="672f5feadfa214c6a0a943e614b3f6be2205caf0cfad789bc48eecd5126064c9" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.881779 4732 scope.go:117] "RemoveContainer" containerID="8fa21222e4cd9c26bf7a8699721788dc5f3bcbad179ebfca9e1ff5dd4f7eca9f" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.903437 4732 scope.go:117] "RemoveContainer" containerID="72926ae802ba11d13ab77c79539bfcc94151936d207f632232dd7d1f41349d6d" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.926709 4732 scope.go:117] "RemoveContainer" containerID="fa17ab717c5754facbf3da31cb4afd3477236bd426c65c9122642f27b5886fb9" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.944872 4732 scope.go:117] "RemoveContainer" containerID="caac1f6da96d02d4cab51830f080cd652323af90c1836570aef955dde0207cff" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.960033 4732 scope.go:117] "RemoveContainer" containerID="261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.960385 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828\": container with ID starting with 261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828 not 
found: ID does not exist" containerID="261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.960420 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828"} err="failed to get container status \"261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828\": rpc error: code = NotFound desc = could not find container \"261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828\": container with ID starting with 261d59a3c1b5662c1245d9e0a26d6fdf5a756126f782cf23384b915262209828 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.960443 4732 scope.go:117] "RemoveContainer" containerID="01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.960851 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172\": container with ID starting with 01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172 not found: ID does not exist" containerID="01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.960875 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172"} err="failed to get container status \"01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172\": rpc error: code = NotFound desc = could not find container \"01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172\": container with ID starting with 01d34a0e2eb5b105a83439a771538234555a443b0c13ee4cb087d83ae6e5a172 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.960890 4732 scope.go:117] "RemoveContainer" containerID="caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.961151 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1\": container with ID starting with caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1 not found: ID does not exist" containerID="caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.961173 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1"} err="failed to get container status \"caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1\": rpc error: code = NotFound desc = could not find container \"caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1\": container with ID starting with caaaa31e0459d9ecc2e8474a50fcbdb26784a11af19cecd7546e523bc30b1bb1 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.961188 4732 scope.go:117] "RemoveContainer" containerID="3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.961575 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1\": container with ID starting with 3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1 not found: ID does not exist" containerID="3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.961641 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1"} err="failed to get container status \"3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1\": rpc error: code = NotFound desc = could not find container \"3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1\": container with ID starting with 3c487459ef32558f2157dbf1c508505bc4dd091b7590c38ba37b19a62a7f80b1 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.961706 4732 scope.go:117] "RemoveContainer" containerID="a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.961977 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9\": container with ID starting with a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9 not found: ID does not exist" containerID="a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.961999 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9"} err="failed to get container status \"a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9\": rpc error: code = NotFound desc = could not find container \"a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9\": container with ID starting with a8e090887d88b45d44e7f952ad72621d08fce059d8e6698d9ad0a38d3082acc9 not found: ID does not exist" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.962014 4732 scope.go:117] "RemoveContainer" containerID="3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016" Jan 31 09:21:25 crc kubenswrapper[4732]: E0131 09:21:25.962469 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016\": container with ID starting with 3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016 not found: ID does not exist" containerID="3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016" Jan 31 09:21:25 crc kubenswrapper[4732]: I0131 09:21:25.962493 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016"} err="failed to get container status \"3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016\": rpc error: code = NotFound desc = could not find container \"3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016\": container with ID starting with 3b885dbb98c02c11a228e346c436a702bd2867b63763347fde6a9f9094c1d016 not found: ID does not exist" Jan 31 09:21:26 crc kubenswrapper[4732]: I0131 09:21:26.551064 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" 
path="/var/lib/kubelet/pods/18b68f5e-a1b4-4f52-9a4e-5967735ec105/volumes" Jan 31 09:21:26 crc kubenswrapper[4732]: I0131 09:21:26.553297 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" path="/var/lib/kubelet/pods/ea3117d7-0038-4ca5-bee5-ae76db9a12eb/volumes" Jan 31 09:21:26 crc kubenswrapper[4732]: I0131 09:21:26.555107 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" path="/var/lib/kubelet/pods/eb04e24b-fc92-4f2e-abcb-fa46706f699a/volumes" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.128304 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.128856 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.128873 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.128885 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.128892 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.128909 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.128915 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.128927 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-reaper" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.128935 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-reaper" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.128948 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.128955 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.128964 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-expirer" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.128972 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-expirer" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.128985 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.128992 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-auditor" Jan 31 09:21:28 crc 
kubenswrapper[4732]: E0131 09:21:28.129001 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129007 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-server" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129018 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129025 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129035 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129041 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129053 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129060 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129071 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="swift-recon-cron" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129078 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="swift-recon-cron" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129089 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-reaper" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129095 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-reaper" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129102 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129108 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-server" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129114 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="rsync" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129120 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="rsync" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129128 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb5e63b-882d-4388-abb1-130923832c9f" containerName="proxy-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129134 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb5e63b-882d-4388-abb1-130923832c9f" containerName="proxy-server" Jan 31 09:21:28 crc 
kubenswrapper[4732]: E0131 09:21:28.129142 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-reaper" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129148 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-reaper" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129158 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129164 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129172 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb5e63b-882d-4388-abb1-130923832c9f" containerName="proxy-httpd" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129178 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb5e63b-882d-4388-abb1-130923832c9f" containerName="proxy-httpd" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129185 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129190 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129200 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129206 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129216 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129222 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129228 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa92affd-0106-4b01-b96c-8f2b0459ee3a" containerName="swift-ring-rebalance" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129233 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa92affd-0106-4b01-b96c-8f2b0459ee3a" containerName="swift-ring-rebalance" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129241 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129246 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129255 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129260 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" 
containerName="object-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129268 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129274 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-server" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129282 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129287 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129295 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="rsync" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129300 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="rsync" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129309 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129314 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129325 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129331 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129340 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129346 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-server" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129353 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129359 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129368 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129376 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129388 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129399 4732 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-server" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129408 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129413 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129421 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129426 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-server" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129435 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129453 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129459 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129465 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129474 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129480 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129487 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129493 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129506 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129517 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-server" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129527 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129534 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-server" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129544 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="swift-recon-cron" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129552 4732 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="swift-recon-cron" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129566 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="rsync" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129574 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="rsync" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129582 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="swift-recon-cron" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129587 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="swift-recon-cron" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129598 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-expirer" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129603 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-expirer" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129613 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129618 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-server" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.129626 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-expirer" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129631 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-expirer" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129815 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129824 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129829 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129837 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="rsync" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129843 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129850 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129857 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 
09:21:28.129865 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129873 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129879 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="swift-recon-cron" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129892 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129898 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129907 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb5e63b-882d-4388-abb1-130923832c9f" containerName="proxy-httpd" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129915 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb5e63b-882d-4388-abb1-130923832c9f" containerName="proxy-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129924 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129933 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="rsync" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129941 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129946 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129952 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129957 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129967 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129975 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa92affd-0106-4b01-b96c-8f2b0459ee3a" containerName="swift-ring-rebalance" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129982 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129987 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.129995 4732 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-expirer" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130004 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130011 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="container-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130020 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-reaper" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130028 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130034 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="container-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130042 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-expirer" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130051 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130056 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="rsync" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130063 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130071 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="swift-recon-cron" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130080 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-reaper" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130085 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-updater" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130091 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="account-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130101 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130108 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-reaper" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130116 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="object-server" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130124 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3117d7-0038-4ca5-bee5-ae76db9a12eb" containerName="container-auditor" Jan 31 09:21:28 crc 
kubenswrapper[4732]: I0131 09:21:28.130130 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="swift-recon-cron" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130136 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="account-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130142 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-replicator" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130150 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="object-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130159 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb04e24b-fc92-4f2e-abcb-fa46706f699a" containerName="account-auditor" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.130166 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b68f5e-a1b4-4f52-9a4e-5967735ec105" containerName="object-expirer" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.138791 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.144356 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.146126 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-z8hn5" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.146404 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.146477 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.154029 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w"] Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.157344 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.159648 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.173143 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w"] Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.178567 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297377 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-run-httpd\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297424 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5994e0-d411-4d47-bcbb-1a12020906ce-config-data\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297444 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-cache\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297462 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qccj\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-kube-api-access-8qccj\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297617 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297694 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-lock\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297763 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njt7j\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-kube-api-access-njt7j\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297790 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297826 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.297840 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-log-httpd\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399347 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399410 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-lock\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399447 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njt7j\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-kube-api-access-njt7j\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399465 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399487 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399502 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-log-httpd\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399534 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-run-httpd\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: 
\"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399557 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5994e0-d411-4d47-bcbb-1a12020906ce-config-data\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399571 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-cache\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.399588 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qccj\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-kube-api-access-8qccj\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.399936 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.399948 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w: configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.399983 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift podName:4d5994e0-d411-4d47-bcbb-1a12020906ce nodeName:}" failed. No retries permitted until 2026-01-31 09:21:28.899967431 +0000 UTC m=+1227.205843635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift") pod "swift-proxy-77c98d654c-ftt6w" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce") : configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.400393 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.400434 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.400503 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift podName:2d99525a-cb49-44dd-82c0-0bf1641ec2b5 nodeName:}" failed. No retries permitted until 2026-01-31 09:21:28.900481797 +0000 UTC m=+1227.206358071 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift") pod "swift-storage-0" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5") : configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.400726 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-cache\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.400764 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-lock\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.400798 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-run-httpd\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.400931 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-log-httpd\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.400956 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") device mount path \"/mnt/openstack/pv06\"" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.420446 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njt7j\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-kube-api-access-njt7j\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.422604 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5994e0-d411-4d47-bcbb-1a12020906ce-config-data\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.422766 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.431800 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qccj\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-kube-api-access-8qccj\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " 
pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.906233 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:28 crc kubenswrapper[4732]: I0131 09:21:28.906335 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.906433 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.906473 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w: configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.906514 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.906536 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.906548 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift podName:4d5994e0-d411-4d47-bcbb-1a12020906ce nodeName:}" failed. No retries permitted until 2026-01-31 09:21:29.906521788 +0000 UTC m=+1228.212398032 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift") pod "swift-proxy-77c98d654c-ftt6w" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce") : configmap "swift-ring-files" not found Jan 31 09:21:28 crc kubenswrapper[4732]: E0131 09:21:28.906597 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift podName:2d99525a-cb49-44dd-82c0-0bf1641ec2b5 nodeName:}" failed. No retries permitted until 2026-01-31 09:21:29.906579279 +0000 UTC m=+1228.212455493 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift") pod "swift-storage-0" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5") : configmap "swift-ring-files" not found Jan 31 09:21:29 crc kubenswrapper[4732]: I0131 09:21:29.922023 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:29 crc kubenswrapper[4732]: I0131 09:21:29.923029 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:29 crc kubenswrapper[4732]: E0131 09:21:29.922234 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:29 crc kubenswrapper[4732]: E0131 09:21:29.923292 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w: configmap "swift-ring-files" not found Jan 31 09:21:29 crc kubenswrapper[4732]: E0131 09:21:29.923340 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift podName:4d5994e0-d411-4d47-bcbb-1a12020906ce nodeName:}" failed. No retries permitted until 2026-01-31 09:21:31.923325575 +0000 UTC m=+1230.229201779 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift") pod "swift-proxy-77c98d654c-ftt6w" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce") : configmap "swift-ring-files" not found Jan 31 09:21:29 crc kubenswrapper[4732]: E0131 09:21:29.923278 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:29 crc kubenswrapper[4732]: E0131 09:21:29.923648 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:21:29 crc kubenswrapper[4732]: E0131 09:21:29.923704 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift podName:2d99525a-cb49-44dd-82c0-0bf1641ec2b5 nodeName:}" failed. No retries permitted until 2026-01-31 09:21:31.923693557 +0000 UTC m=+1230.229569761 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift") pod "swift-storage-0" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5") : configmap "swift-ring-files" not found Jan 31 09:21:31 crc kubenswrapper[4732]: I0131 09:21:31.951402 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:31 crc kubenswrapper[4732]: I0131 09:21:31.951826 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:31 crc kubenswrapper[4732]: E0131 09:21:31.951604 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:31 crc kubenswrapper[4732]: E0131 09:21:31.951878 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w: configmap "swift-ring-files" not found Jan 31 09:21:31 crc kubenswrapper[4732]: E0131 09:21:31.951944 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift podName:4d5994e0-d411-4d47-bcbb-1a12020906ce nodeName:}" failed. No retries permitted until 2026-01-31 09:21:35.951916246 +0000 UTC m=+1234.257792450 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift") pod "swift-proxy-77c98d654c-ftt6w" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce") : configmap "swift-ring-files" not found Jan 31 09:21:31 crc kubenswrapper[4732]: E0131 09:21:31.951946 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:31 crc kubenswrapper[4732]: E0131 09:21:31.951962 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:21:31 crc kubenswrapper[4732]: E0131 09:21:31.951993 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift podName:2d99525a-cb49-44dd-82c0-0bf1641ec2b5 nodeName:}" failed. No retries permitted until 2026-01-31 09:21:35.951978728 +0000 UTC m=+1234.257854932 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift") pod "swift-storage-0" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5") : configmap "swift-ring-files" not found Jan 31 09:21:31 crc kubenswrapper[4732]: I0131 09:21:31.976793 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vj28j"] Jan 31 09:21:31 crc kubenswrapper[4732]: I0131 09:21:31.977658 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:31 crc kubenswrapper[4732]: I0131 09:21:31.981037 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 09:21:31 crc kubenswrapper[4732]: I0131 09:21:31.981044 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 09:21:31 crc kubenswrapper[4732]: I0131 09:21:31.986848 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vj28j"] Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.156021 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-ring-data-devices\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.156095 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4957\" (UniqueName: \"kubernetes.io/projected/0c80b3a6-8701-4276-a6a2-80913e60ea9a-kube-api-access-t4957\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.156225 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c80b3a6-8701-4276-a6a2-80913e60ea9a-etc-swift\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.156295 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-scripts\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.156544 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-dispersionconf\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.156623 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-swiftconf\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.258545 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c80b3a6-8701-4276-a6a2-80913e60ea9a-etc-swift\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.258764 4732 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-scripts\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.258837 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-dispersionconf\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.258864 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-swiftconf\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.258940 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-ring-data-devices\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.258976 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4957\" (UniqueName: \"kubernetes.io/projected/0c80b3a6-8701-4276-a6a2-80913e60ea9a-kube-api-access-t4957\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.259164 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c80b3a6-8701-4276-a6a2-80913e60ea9a-etc-swift\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.260728 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-scripts\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.260983 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-ring-data-devices\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.265867 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-dispersionconf\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.267727 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-swiftconf\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.278323 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4957\" (UniqueName: \"kubernetes.io/projected/0c80b3a6-8701-4276-a6a2-80913e60ea9a-kube-api-access-t4957\") pod \"swift-ring-rebalance-vj28j\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.295645 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:32 crc kubenswrapper[4732]: I0131 09:21:32.705150 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vj28j"] Jan 31 09:21:32 crc kubenswrapper[4732]: W0131 09:21:32.723051 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c80b3a6_8701_4276_a6a2_80913e60ea9a.slice/crio-b5257e67474cd986634f02ff3791244e4413c3901b1c0a424a805eb7a793b505 WatchSource:0}: Error finding container b5257e67474cd986634f02ff3791244e4413c3901b1c0a424a805eb7a793b505: Status 404 returned error can't find the container with id b5257e67474cd986634f02ff3791244e4413c3901b1c0a424a805eb7a793b505 Jan 31 09:21:33 crc kubenswrapper[4732]: I0131 09:21:33.509972 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" event={"ID":"0c80b3a6-8701-4276-a6a2-80913e60ea9a","Type":"ContainerStarted","Data":"85b861719f4e2c096ba302360733974fc49e5be1c8a6dc54dda1f149625db608"} Jan 31 09:21:33 crc kubenswrapper[4732]: I0131 09:21:33.510487 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" event={"ID":"0c80b3a6-8701-4276-a6a2-80913e60ea9a","Type":"ContainerStarted","Data":"b5257e67474cd986634f02ff3791244e4413c3901b1c0a424a805eb7a793b505"} Jan 31 09:21:33 crc kubenswrapper[4732]: I0131 09:21:33.531875 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" podStartSLOduration=2.5318496809999997 podStartE2EDuration="2.531849681s" podCreationTimestamp="2026-01-31 09:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:21:33.528604921 +0000 UTC m=+1231.834481135" watchObservedRunningTime="2026-01-31 09:21:33.531849681 +0000 UTC m=+1231.837725885" Jan 31 09:21:36 crc kubenswrapper[4732]: I0131 09:21:36.020981 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:36 crc kubenswrapper[4732]: I0131 09:21:36.021294 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:36 crc kubenswrapper[4732]: E0131 09:21:36.021175 4732 
projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:36 crc kubenswrapper[4732]: E0131 09:21:36.021477 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w: configmap "swift-ring-files" not found Jan 31 09:21:36 crc kubenswrapper[4732]: E0131 09:21:36.021519 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift podName:4d5994e0-d411-4d47-bcbb-1a12020906ce nodeName:}" failed. No retries permitted until 2026-01-31 09:21:44.021506706 +0000 UTC m=+1242.327382900 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift") pod "swift-proxy-77c98d654c-ftt6w" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce") : configmap "swift-ring-files" not found Jan 31 09:21:36 crc kubenswrapper[4732]: E0131 09:21:36.021461 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:21:36 crc kubenswrapper[4732]: E0131 09:21:36.021811 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:21:36 crc kubenswrapper[4732]: E0131 09:21:36.021834 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift podName:2d99525a-cb49-44dd-82c0-0bf1641ec2b5 nodeName:}" failed. No retries permitted until 2026-01-31 09:21:44.021826616 +0000 UTC m=+1242.327702820 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift") pod "swift-storage-0" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5") : configmap "swift-ring-files" not found Jan 31 09:21:39 crc kubenswrapper[4732]: I0131 09:21:39.560365 4732 generic.go:334] "Generic (PLEG): container finished" podID="0c80b3a6-8701-4276-a6a2-80913e60ea9a" containerID="85b861719f4e2c096ba302360733974fc49e5be1c8a6dc54dda1f149625db608" exitCode=0 Jan 31 09:21:39 crc kubenswrapper[4732]: I0131 09:21:39.560461 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" event={"ID":"0c80b3a6-8701-4276-a6a2-80913e60ea9a","Type":"ContainerDied","Data":"85b861719f4e2c096ba302360733974fc49e5be1c8a6dc54dda1f149625db608"} Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.830952 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.989738 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c80b3a6-8701-4276-a6a2-80913e60ea9a-etc-swift\") pod \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.989804 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4957\" (UniqueName: \"kubernetes.io/projected/0c80b3a6-8701-4276-a6a2-80913e60ea9a-kube-api-access-t4957\") pod \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.989839 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-scripts\") pod \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.990601 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c80b3a6-8701-4276-a6a2-80913e60ea9a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0c80b3a6-8701-4276-a6a2-80913e60ea9a" (UID: "0c80b3a6-8701-4276-a6a2-80913e60ea9a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.990783 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-dispersionconf\") pod \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.990851 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-ring-data-devices\") pod \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.990920 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-swiftconf\") pod \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\" (UID: \"0c80b3a6-8701-4276-a6a2-80913e60ea9a\") " Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.991277 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0c80b3a6-8701-4276-a6a2-80913e60ea9a" (UID: "0c80b3a6-8701-4276-a6a2-80913e60ea9a"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.991550 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.991573 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0c80b3a6-8701-4276-a6a2-80913e60ea9a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:40 crc kubenswrapper[4732]: I0131 09:21:40.995057 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c80b3a6-8701-4276-a6a2-80913e60ea9a-kube-api-access-t4957" (OuterVolumeSpecName: "kube-api-access-t4957") pod "0c80b3a6-8701-4276-a6a2-80913e60ea9a" (UID: "0c80b3a6-8701-4276-a6a2-80913e60ea9a"). InnerVolumeSpecName "kube-api-access-t4957". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.007199 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-scripts" (OuterVolumeSpecName: "scripts") pod "0c80b3a6-8701-4276-a6a2-80913e60ea9a" (UID: "0c80b3a6-8701-4276-a6a2-80913e60ea9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.008609 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0c80b3a6-8701-4276-a6a2-80913e60ea9a" (UID: "0c80b3a6-8701-4276-a6a2-80913e60ea9a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.010431 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0c80b3a6-8701-4276-a6a2-80913e60ea9a" (UID: "0c80b3a6-8701-4276-a6a2-80913e60ea9a"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.092391 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4957\" (UniqueName: \"kubernetes.io/projected/0c80b3a6-8701-4276-a6a2-80913e60ea9a-kube-api-access-t4957\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.092443 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c80b3a6-8701-4276-a6a2-80913e60ea9a-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.092458 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.092470 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0c80b3a6-8701-4276-a6a2-80913e60ea9a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.575359 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" event={"ID":"0c80b3a6-8701-4276-a6a2-80913e60ea9a","Type":"ContainerDied","Data":"b5257e67474cd986634f02ff3791244e4413c3901b1c0a424a805eb7a793b505"} Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.575398 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5257e67474cd986634f02ff3791244e4413c3901b1c0a424a805eb7a793b505" Jan 31 09:21:41 crc kubenswrapper[4732]: I0131 09:21:41.575471 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-vj28j" Jan 31 09:21:44 crc kubenswrapper[4732]: I0131 09:21:44.036542 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:44 crc kubenswrapper[4732]: I0131 09:21:44.036868 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:44 crc kubenswrapper[4732]: I0131 09:21:44.043811 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") pod \"swift-proxy-77c98d654c-ftt6w\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:44 crc kubenswrapper[4732]: I0131 09:21:44.048494 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") pod \"swift-storage-0\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:44 crc kubenswrapper[4732]: I0131 09:21:44.082207 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:21:44 crc kubenswrapper[4732]: I0131 09:21:44.099463 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:44 crc kubenswrapper[4732]: I0131 09:21:44.554874 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w"] Jan 31 09:21:44 crc kubenswrapper[4732]: I0131 09:21:44.602285 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" event={"ID":"4d5994e0-d411-4d47-bcbb-1a12020906ce","Type":"ContainerStarted","Data":"7aec8f717f6235c07faf20c4f2b84f8af9c000ef4913a0dcde781bd2a8cc7aa5"} Jan 31 09:21:44 crc kubenswrapper[4732]: W0131 09:21:44.634458 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d99525a_cb49_44dd_82c0_0bf1641ec2b5.slice/crio-bb37e542519324244c6258336774da979636b6257f7ee208e0822166576c6a8d WatchSource:0}: Error finding container bb37e542519324244c6258336774da979636b6257f7ee208e0822166576c6a8d: Status 404 returned error can't find the container with id bb37e542519324244c6258336774da979636b6257f7ee208e0822166576c6a8d Jan 31 09:21:44 crc kubenswrapper[4732]: I0131 09:21:44.637192 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.615180 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" event={"ID":"4d5994e0-d411-4d47-bcbb-1a12020906ce","Type":"ContainerStarted","Data":"9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5"} Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.615604 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.615621 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" event={"ID":"4d5994e0-d411-4d47-bcbb-1a12020906ce","Type":"ContainerStarted","Data":"db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1"} Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.615636 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.619877 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"86becf7472fc9c7648db29999d468c19bd3b7575b10b92c4b9928d2888a627aa"} Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.619919 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"4eb0e325fda950d987c29c36e29c511060e69238d602a70ab284eaba918266e8"} Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.619930 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"3ca70055c8d797fcc887af9ddd4c9b0ea366e17888bc18156cd3c6a4de4dc32b"} Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.619939 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"a064100c3bd45fa424991f4128e58ffe5bcccf2ec5e5089c19cf6692c22ab9c5"} Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.619947 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"47aaccf941b2ffbe89c21b9e0b3a892b4f21c0ac0a9cf298c23bf8752bd80a1a"} Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.619955 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"bb37e542519324244c6258336774da979636b6257f7ee208e0822166576c6a8d"} Jan 31 09:21:45 crc kubenswrapper[4732]: I0131 09:21:45.647545 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" podStartSLOduration=17.647512120000002 podStartE2EDuration="17.64751212s" podCreationTimestamp="2026-01-31 09:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:21:45.634920879 +0000 UTC m=+1243.940797093" watchObservedRunningTime="2026-01-31 09:21:45.64751212 +0000 UTC m=+1243.953388324" Jan 31 09:21:46 crc kubenswrapper[4732]: I0131 09:21:46.633126 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"a018ce2a323e1b0e2aa072d2f93334f3b4e39c300d0ff2335987aa34dde530ce"} Jan 31 09:21:46 crc kubenswrapper[4732]: I0131 09:21:46.633220 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"45ccea61db4e777ffe785a3bd9a6ec4740eee0e550c2a7527a6b1997cee06c20"} Jan 31 09:21:46 crc kubenswrapper[4732]: I0131 09:21:46.633241 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"7d05377fbb40377380b56c2e05a3671b30e05ae9c65dc9991a56bb3e0d776300"} Jan 31 09:21:46 crc kubenswrapper[4732]: I0131 09:21:46.633256 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"55b7d2c297a2c51bfe7bb69e4e050254e0b55aad5ce8bd28be166e236c03b5a4"} Jan 31 09:21:46 crc kubenswrapper[4732]: I0131 09:21:46.633270 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"e3b3c2c088602818d41287973de5c283cc9912805d7ff52354faa3f8b3e0f918"} Jan 31 09:21:47 crc kubenswrapper[4732]: I0131 09:21:47.648765 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"39108093c7167b1324969b66f03261329d95d03eef264a245346322307d2c2ca"} Jan 31 09:21:47 crc kubenswrapper[4732]: I0131 09:21:47.649334 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"3e318603930992b9a03d77a16fdd999a5fbbbc52598af387e3bde175cfb85fae"} Jan 31 09:21:47 crc kubenswrapper[4732]: I0131 09:21:47.649348 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"aafbf67742419bba0b805990b5fa8c2858a89d917bc8094ccad71c66aad20c3c"} Jan 31 09:21:47 crc kubenswrapper[4732]: I0131 09:21:47.649359 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"25397f43691b59da5a03ae8b3f0545fb630ecb37363d9a82b75ece89ba8f270b"} Jan 31 09:21:48 crc kubenswrapper[4732]: I0131 09:21:48.672604 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerStarted","Data":"545bc0375b464b90da540c011cf7e399ffc165b32303354c396c826e74d84e25"} Jan 31 09:21:48 crc kubenswrapper[4732]: I0131 09:21:48.714882 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=21.714860772 podStartE2EDuration="21.714860772s" podCreationTimestamp="2026-01-31 09:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:21:48.705222383 +0000 UTC m=+1247.011098597" watchObservedRunningTime="2026-01-31 09:21:48.714860772 +0000 UTC m=+1247.020736996" Jan 31 09:21:49 crc kubenswrapper[4732]: I0131 09:21:49.109799 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:49 crc kubenswrapper[4732]: I0131 09:21:49.112448 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.374834 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vj28j"] Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.381348 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vj28j"] Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.397484 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.553349 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c80b3a6-8701-4276-a6a2-80913e60ea9a" path="/var/lib/kubelet/pods/0c80b3a6-8701-4276-a6a2-80913e60ea9a/volumes" Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.569065 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w"] Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.702892 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-server" containerID="cri-o://47aaccf941b2ffbe89c21b9e0b3a892b4f21c0ac0a9cf298c23bf8752bd80a1a" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703060 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" 
containerName="container-replicator" containerID="cri-o://e3b3c2c088602818d41287973de5c283cc9912805d7ff52354faa3f8b3e0f918" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703097 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-replicator" containerID="cri-o://a018ce2a323e1b0e2aa072d2f93334f3b4e39c300d0ff2335987aa34dde530ce" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703105 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-updater" containerID="cri-o://7d05377fbb40377380b56c2e05a3671b30e05ae9c65dc9991a56bb3e0d776300" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703161 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-auditor" containerID="cri-o://55b7d2c297a2c51bfe7bb69e4e050254e0b55aad5ce8bd28be166e236c03b5a4" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703181 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-reaper" containerID="cri-o://4eb0e325fda950d987c29c36e29c511060e69238d602a70ab284eaba918266e8" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703109 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-auditor" containerID="cri-o://3ca70055c8d797fcc887af9ddd4c9b0ea366e17888bc18156cd3c6a4de4dc32b" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703166 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-server" containerID="cri-o://86becf7472fc9c7648db29999d468c19bd3b7575b10b92c4b9928d2888a627aa" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703149 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="swift-recon-cron" containerID="cri-o://545bc0375b464b90da540c011cf7e399ffc165b32303354c396c826e74d84e25" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703105 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-server" containerID="cri-o://45ccea61db4e777ffe785a3bd9a6ec4740eee0e550c2a7527a6b1997cee06c20" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703295 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-expirer" containerID="cri-o://3e318603930992b9a03d77a16fdd999a5fbbbc52598af387e3bde175cfb85fae" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703325 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" 
podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-auditor" containerID="cri-o://25397f43691b59da5a03ae8b3f0545fb630ecb37363d9a82b75ece89ba8f270b" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703332 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-updater" containerID="cri-o://aafbf67742419bba0b805990b5fa8c2858a89d917bc8094ccad71c66aad20c3c" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703301 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="rsync" containerID="cri-o://39108093c7167b1324969b66f03261329d95d03eef264a245346322307d2c2ca" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703132 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-replicator" containerID="cri-o://a064100c3bd45fa424991f4128e58ffe5bcccf2ec5e5089c19cf6692c22ab9c5" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703382 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" podUID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerName="proxy-httpd" containerID="cri-o://db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1" gracePeriod=30 Jan 31 09:21:50 crc kubenswrapper[4732]: I0131 09:21:50.703422 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" podUID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerName="proxy-server" containerID="cri-o://9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5" gracePeriod=30 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.297424 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.451540 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5994e0-d411-4d47-bcbb-1a12020906ce-config-data\") pod \"4d5994e0-d411-4d47-bcbb-1a12020906ce\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.451629 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") pod \"4d5994e0-d411-4d47-bcbb-1a12020906ce\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.451688 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-run-httpd\") pod \"4d5994e0-d411-4d47-bcbb-1a12020906ce\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.451731 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-log-httpd\") pod \"4d5994e0-d411-4d47-bcbb-1a12020906ce\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.451856 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njt7j\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-kube-api-access-njt7j\") pod \"4d5994e0-d411-4d47-bcbb-1a12020906ce\" (UID: \"4d5994e0-d411-4d47-bcbb-1a12020906ce\") " Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.452091 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4d5994e0-d411-4d47-bcbb-1a12020906ce" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.452425 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4d5994e0-d411-4d47-bcbb-1a12020906ce" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.457860 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-kube-api-access-njt7j" (OuterVolumeSpecName: "kube-api-access-njt7j") pod "4d5994e0-d411-4d47-bcbb-1a12020906ce" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce"). InnerVolumeSpecName "kube-api-access-njt7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.458913 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4d5994e0-d411-4d47-bcbb-1a12020906ce" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.504896 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5994e0-d411-4d47-bcbb-1a12020906ce-config-data" (OuterVolumeSpecName: "config-data") pod "4d5994e0-d411-4d47-bcbb-1a12020906ce" (UID: "4d5994e0-d411-4d47-bcbb-1a12020906ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.553790 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.553824 4732 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.553834 4732 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d5994e0-d411-4d47-bcbb-1a12020906ce-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.553845 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njt7j\" (UniqueName: \"kubernetes.io/projected/4d5994e0-d411-4d47-bcbb-1a12020906ce-kube-api-access-njt7j\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.553855 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5994e0-d411-4d47-bcbb-1a12020906ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.714475 4732 generic.go:334] "Generic (PLEG): container finished" podID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerID="9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.714523 4732 generic.go:334] "Generic (PLEG): container finished" podID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerID="db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.714794 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.714804 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" event={"ID":"4d5994e0-d411-4d47-bcbb-1a12020906ce","Type":"ContainerDied","Data":"9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.714866 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" event={"ID":"4d5994e0-d411-4d47-bcbb-1a12020906ce","Type":"ContainerDied","Data":"db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.714880 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w" event={"ID":"4d5994e0-d411-4d47-bcbb-1a12020906ce","Type":"ContainerDied","Data":"7aec8f717f6235c07faf20c4f2b84f8af9c000ef4913a0dcde781bd2a8cc7aa5"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.714901 4732 scope.go:117] "RemoveContainer" containerID="9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.721939 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="39108093c7167b1324969b66f03261329d95d03eef264a245346322307d2c2ca" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.721981 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="3e318603930992b9a03d77a16fdd999a5fbbbc52598af387e3bde175cfb85fae" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.721992 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="aafbf67742419bba0b805990b5fa8c2858a89d917bc8094ccad71c66aad20c3c" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722003 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="25397f43691b59da5a03ae8b3f0545fb630ecb37363d9a82b75ece89ba8f270b" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722013 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="a018ce2a323e1b0e2aa072d2f93334f3b4e39c300d0ff2335987aa34dde530ce" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722023 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="45ccea61db4e777ffe785a3bd9a6ec4740eee0e550c2a7527a6b1997cee06c20" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722033 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="7d05377fbb40377380b56c2e05a3671b30e05ae9c65dc9991a56bb3e0d776300" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722042 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="55b7d2c297a2c51bfe7bb69e4e050254e0b55aad5ce8bd28be166e236c03b5a4" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722050 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="e3b3c2c088602818d41287973de5c283cc9912805d7ff52354faa3f8b3e0f918" exitCode=0 Jan 31 09:21:51 crc 
kubenswrapper[4732]: I0131 09:21:51.722058 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="86becf7472fc9c7648db29999d468c19bd3b7575b10b92c4b9928d2888a627aa" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722069 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="4eb0e325fda950d987c29c36e29c511060e69238d602a70ab284eaba918266e8" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722078 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="3ca70055c8d797fcc887af9ddd4c9b0ea366e17888bc18156cd3c6a4de4dc32b" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722087 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="a064100c3bd45fa424991f4128e58ffe5bcccf2ec5e5089c19cf6692c22ab9c5" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722095 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="47aaccf941b2ffbe89c21b9e0b3a892b4f21c0ac0a9cf298c23bf8752bd80a1a" exitCode=0 Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722117 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"39108093c7167b1324969b66f03261329d95d03eef264a245346322307d2c2ca"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722146 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"3e318603930992b9a03d77a16fdd999a5fbbbc52598af387e3bde175cfb85fae"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722160 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"aafbf67742419bba0b805990b5fa8c2858a89d917bc8094ccad71c66aad20c3c"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722176 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"25397f43691b59da5a03ae8b3f0545fb630ecb37363d9a82b75ece89ba8f270b"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722190 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"a018ce2a323e1b0e2aa072d2f93334f3b4e39c300d0ff2335987aa34dde530ce"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722202 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"45ccea61db4e777ffe785a3bd9a6ec4740eee0e550c2a7527a6b1997cee06c20"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722210 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"7d05377fbb40377380b56c2e05a3671b30e05ae9c65dc9991a56bb3e0d776300"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722221 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"55b7d2c297a2c51bfe7bb69e4e050254e0b55aad5ce8bd28be166e236c03b5a4"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722233 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"e3b3c2c088602818d41287973de5c283cc9912805d7ff52354faa3f8b3e0f918"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722245 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"86becf7472fc9c7648db29999d468c19bd3b7575b10b92c4b9928d2888a627aa"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722256 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"4eb0e325fda950d987c29c36e29c511060e69238d602a70ab284eaba918266e8"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722264 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"3ca70055c8d797fcc887af9ddd4c9b0ea366e17888bc18156cd3c6a4de4dc32b"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722274 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"a064100c3bd45fa424991f4128e58ffe5bcccf2ec5e5089c19cf6692c22ab9c5"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.722284 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"47aaccf941b2ffbe89c21b9e0b3a892b4f21c0ac0a9cf298c23bf8752bd80a1a"} Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.735303 4732 scope.go:117] "RemoveContainer" containerID="db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.747901 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w"] Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.753573 4732 scope.go:117] "RemoveContainer" containerID="9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5" Jan 31 09:21:51 crc kubenswrapper[4732]: E0131 09:21:51.754632 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5\": container with ID starting with 9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5 not found: ID does not exist" containerID="9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.754706 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5"} err="failed to get container status \"9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5\": rpc error: code = NotFound desc = could not find container \"9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5\": container with ID starting with 
9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5 not found: ID does not exist" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.754743 4732 scope.go:117] "RemoveContainer" containerID="db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1" Jan 31 09:21:51 crc kubenswrapper[4732]: E0131 09:21:51.755059 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1\": container with ID starting with db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1 not found: ID does not exist" containerID="db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.755100 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1"} err="failed to get container status \"db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1\": rpc error: code = NotFound desc = could not find container \"db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1\": container with ID starting with db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1 not found: ID does not exist" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.755119 4732 scope.go:117] "RemoveContainer" containerID="9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.755376 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5"} err="failed to get container status \"9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5\": rpc error: code = NotFound desc = could not find container \"9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5\": container with ID starting with 9b1ff239a7bac22b3621f4ddf89b0954862df7c17ff58d0be28c2216ee9690d5 not found: ID does not exist" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.755405 4732 scope.go:117] "RemoveContainer" containerID="db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.755725 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1"} err="failed to get container status \"db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1\": rpc error: code = NotFound desc = could not find container \"db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1\": container with ID starting with db0d291a904e63a804b3e8e9f1722ea742a8b5887a09628e8fd5a89d8eceb3a1 not found: ID does not exist" Jan 31 09:21:51 crc kubenswrapper[4732]: I0131 09:21:51.760871 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-ftt6w"] Jan 31 09:21:52 crc kubenswrapper[4732]: I0131 09:21:52.557998 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5994e0-d411-4d47-bcbb-1a12020906ce" path="/var/lib/kubelet/pods/4d5994e0-d411-4d47-bcbb-1a12020906ce/volumes" Jan 31 09:22:17 crc kubenswrapper[4732]: I0131 09:22:17.497409 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:22:17 crc kubenswrapper[4732]: I0131 09:22:17.497984 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:22:20 crc kubenswrapper[4732]: E0131 09:22:20.904996 4732 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d99525a_cb49_44dd_82c0_0bf1641ec2b5.slice/crio-conmon-545bc0375b464b90da540c011cf7e399ffc165b32303354c396c826e74d84e25.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:22:20 crc kubenswrapper[4732]: I0131 09:22:20.981413 4732 generic.go:334] "Generic (PLEG): container finished" podID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerID="545bc0375b464b90da540c011cf7e399ffc165b32303354c396c826e74d84e25" exitCode=137 Jan 31 09:22:20 crc kubenswrapper[4732]: I0131 09:22:20.981777 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"545bc0375b464b90da540c011cf7e399ffc165b32303354c396c826e74d84e25"} Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.047885 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.188036 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qccj\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-kube-api-access-8qccj\") pod \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.188116 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-lock\") pod \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.188141 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-cache\") pod \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.188184 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.188209 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") pod \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\" (UID: \"2d99525a-cb49-44dd-82c0-0bf1641ec2b5\") " Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.188721 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-lock" (OuterVolumeSpecName: "lock") pod "2d99525a-cb49-44dd-82c0-0bf1641ec2b5" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.188933 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-cache" (OuterVolumeSpecName: "cache") pod "2d99525a-cb49-44dd-82c0-0bf1641ec2b5" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.194215 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "2d99525a-cb49-44dd-82c0-0bf1641ec2b5" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.194240 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2d99525a-cb49-44dd-82c0-0bf1641ec2b5" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.194266 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-kube-api-access-8qccj" (OuterVolumeSpecName: "kube-api-access-8qccj") pod "2d99525a-cb49-44dd-82c0-0bf1641ec2b5" (UID: "2d99525a-cb49-44dd-82c0-0bf1641ec2b5"). InnerVolumeSpecName "kube-api-access-8qccj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.290481 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qccj\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-kube-api-access-8qccj\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.290533 4732 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.290550 4732 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-cache\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.290595 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.290612 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2d99525a-cb49-44dd-82c0-0bf1641ec2b5-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.311530 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 09:22:21 crc kubenswrapper[4732]: I0131 09:22:21.392284 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.032816 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"2d99525a-cb49-44dd-82c0-0bf1641ec2b5","Type":"ContainerDied","Data":"bb37e542519324244c6258336774da979636b6257f7ee208e0822166576c6a8d"} Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.032903 4732 scope.go:117] "RemoveContainer" containerID="545bc0375b464b90da540c011cf7e399ffc165b32303354c396c826e74d84e25" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.033051 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.056745 4732 scope.go:117] "RemoveContainer" containerID="39108093c7167b1324969b66f03261329d95d03eef264a245346322307d2c2ca" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.082571 4732 scope.go:117] "RemoveContainer" containerID="3e318603930992b9a03d77a16fdd999a5fbbbc52598af387e3bde175cfb85fae" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.093223 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.103073 4732 scope.go:117] "RemoveContainer" containerID="aafbf67742419bba0b805990b5fa8c2858a89d917bc8094ccad71c66aad20c3c" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.104910 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.116903 4732 scope.go:117] "RemoveContainer" containerID="25397f43691b59da5a03ae8b3f0545fb630ecb37363d9a82b75ece89ba8f270b" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.130639 4732 scope.go:117] "RemoveContainer" containerID="a018ce2a323e1b0e2aa072d2f93334f3b4e39c300d0ff2335987aa34dde530ce" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.145557 4732 scope.go:117] "RemoveContainer" containerID="45ccea61db4e777ffe785a3bd9a6ec4740eee0e550c2a7527a6b1997cee06c20" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.161595 4732 scope.go:117] "RemoveContainer" containerID="7d05377fbb40377380b56c2e05a3671b30e05ae9c65dc9991a56bb3e0d776300" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.176431 4732 scope.go:117] "RemoveContainer" containerID="55b7d2c297a2c51bfe7bb69e4e050254e0b55aad5ce8bd28be166e236c03b5a4" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.192076 4732 scope.go:117] "RemoveContainer" containerID="e3b3c2c088602818d41287973de5c283cc9912805d7ff52354faa3f8b3e0f918" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.208467 4732 scope.go:117] "RemoveContainer" containerID="86becf7472fc9c7648db29999d468c19bd3b7575b10b92c4b9928d2888a627aa" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.230549 4732 scope.go:117] "RemoveContainer" containerID="4eb0e325fda950d987c29c36e29c511060e69238d602a70ab284eaba918266e8" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.245881 4732 scope.go:117] "RemoveContainer" containerID="3ca70055c8d797fcc887af9ddd4c9b0ea366e17888bc18156cd3c6a4de4dc32b" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.262481 4732 scope.go:117] "RemoveContainer" containerID="a064100c3bd45fa424991f4128e58ffe5bcccf2ec5e5089c19cf6692c22ab9c5" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.281550 4732 scope.go:117] "RemoveContainer" containerID="47aaccf941b2ffbe89c21b9e0b3a892b4f21c0ac0a9cf298c23bf8752bd80a1a" Jan 31 09:22:22 crc kubenswrapper[4732]: I0131 09:22:22.550545 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" path="/var/lib/kubelet/pods/2d99525a-cb49-44dd-82c0-0bf1641ec2b5/volumes" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.108770 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm"] Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109074 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-server" Jan 31 09:22:24 crc kubenswrapper[4732]: 
I0131 09:22:24.109090 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-server" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109097 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-auditor" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109103 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-auditor" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109120 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerName="proxy-httpd" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109126 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerName="proxy-httpd" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109139 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="swift-recon-cron" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109145 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="swift-recon-cron" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109151 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-auditor" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109157 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-auditor" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109163 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-replicator" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109169 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-replicator" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109178 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-server" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109183 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-server" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109191 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c80b3a6-8701-4276-a6a2-80913e60ea9a" containerName="swift-ring-rebalance" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109196 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c80b3a6-8701-4276-a6a2-80913e60ea9a" containerName="swift-ring-rebalance" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109207 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-replicator" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109213 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-replicator" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109224 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-auditor" Jan 31 
09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109229 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-auditor" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109238 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-replicator" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109244 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-replicator" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109255 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-reaper" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109260 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-reaper" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109266 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-updater" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109274 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-updater" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109280 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerName="proxy-server" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109285 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerName="proxy-server" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109294 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-updater" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109299 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-updater" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109306 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="rsync" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109311 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="rsync" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109319 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-server" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109325 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-server" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.109334 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-expirer" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109341 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-expirer" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109463 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-server" Jan 31 
09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109472 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-auditor" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109481 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-server" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109487 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-updater" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109495 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-updater" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109504 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-replicator" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109512 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-replicator" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109520 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c80b3a6-8701-4276-a6a2-80913e60ea9a" containerName="swift-ring-rebalance" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109528 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-auditor" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109536 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-server" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109544 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-replicator" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109552 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerName="proxy-httpd" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109560 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="container-auditor" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109568 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="object-expirer" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109576 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5994e0-d411-4d47-bcbb-1a12020906ce" containerName="proxy-server" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109585 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="account-reaper" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109592 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="rsync" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.109600 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d99525a-cb49-44dd-82c0-0bf1641ec2b5" containerName="swift-recon-cron" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.110325 4732 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.113898 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-rq6ln" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.113933 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.113898 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.114021 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-internal-svc" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.114052 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.114250 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"combined-ca-bundle" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.117454 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-public-svc" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.127822 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm"] Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.180308 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.184590 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.191946 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.197037 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.233142 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-run-httpd\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.233199 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmr65\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-kube-api-access-nmr65\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.233223 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.233306 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-combined-ca-bundle\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.233344 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-internal-tls-certs\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.233438 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-config-data\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.233484 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-log-httpd\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.233521 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-public-tls-certs\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334477 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-internal-tls-certs\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334524 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334546 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334568 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-config-data\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334583 4732 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-log-httpd\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334603 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-lock\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334620 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-public-tls-certs\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334642 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334705 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjw2\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-kube-api-access-7sjw2\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334742 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-run-httpd\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334775 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmr65\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-kube-api-access-nmr65\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334797 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334835 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-cache\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.334883 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-combined-ca-bundle\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.335944 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-run-httpd\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.336029 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.336045 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm: configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.336083 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift podName:bdedfde8-2a77-4328-8d12-1ed7e7c383d7 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:24.83606696 +0000 UTC m=+1283.141943164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift") pod "swift-proxy-5c474fc7f4-zrlwm" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7") : configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.336029 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-log-httpd\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.340397 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-combined-ca-bundle\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.340664 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-internal-tls-certs\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.341723 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-config-data\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.341850 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-public-tls-certs\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: 
\"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.356366 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmr65\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-kube-api-access-nmr65\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.436403 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.436459 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.436500 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-lock\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.436531 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.436573 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjw2\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-kube-api-access-7sjw2\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.436633 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-cache\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.437150 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-cache\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.437273 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.437295 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.437337 4732 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift podName:ba8f0576-6adb-407c-b8e0-e4b04f0d47e3 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:24.937321337 +0000 UTC m=+1283.243197541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift") pod "swift-storage-0" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3") : configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.437880 4732 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.438159 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-lock\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.441438 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.455872 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.457460 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjw2\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-kube-api-access-7sjw2\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.568088 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mkrwv"] Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.568904 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.573075 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.573865 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.577248 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mkrwv"] Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.740451 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-scripts\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.740502 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9np9m\" (UniqueName: \"kubernetes.io/projected/84cedd57-5030-425a-8567-ceeda6aa0109-kube-api-access-9np9m\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.740585 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-dispersionconf\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.740731 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-ring-data-devices\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.740829 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-combined-ca-bundle\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.740954 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-swiftconf\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.741013 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84cedd57-5030-425a-8567-ceeda6aa0109-etc-swift\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.842189 4732 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9np9m\" (UniqueName: \"kubernetes.io/projected/84cedd57-5030-425a-8567-ceeda6aa0109-kube-api-access-9np9m\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.842270 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-dispersionconf\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.842323 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-ring-data-devices\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.842384 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-combined-ca-bundle\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.842447 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.842497 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-swiftconf\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.842577 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84cedd57-5030-425a-8567-ceeda6aa0109-etc-swift\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.842685 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-scripts\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.843135 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.843198 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm: configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.843309 4732 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift podName:bdedfde8-2a77-4328-8d12-1ed7e7c383d7 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:25.843260255 +0000 UTC m=+1284.149136499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift") pod "swift-proxy-5c474fc7f4-zrlwm" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7") : configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.844088 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84cedd57-5030-425a-8567-ceeda6aa0109-etc-swift\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.844319 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-scripts\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.844835 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-ring-data-devices\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.848266 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-combined-ca-bundle\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.848429 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-swiftconf\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.852298 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-dispersionconf\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.861384 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9np9m\" (UniqueName: \"kubernetes.io/projected/84cedd57-5030-425a-8567-ceeda6aa0109-kube-api-access-9np9m\") pod \"swift-ring-rebalance-mkrwv\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.890539 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:24 crc kubenswrapper[4732]: I0131 09:22:24.944083 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.944302 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.944460 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:24 crc kubenswrapper[4732]: E0131 09:22:24.944515 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift podName:ba8f0576-6adb-407c-b8e0-e4b04f0d47e3 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:25.944497342 +0000 UTC m=+1284.250373546 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift") pod "swift-storage-0" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3") : configmap "swift-ring-files" not found Jan 31 09:22:25 crc kubenswrapper[4732]: I0131 09:22:25.395103 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mkrwv"] Jan 31 09:22:25 crc kubenswrapper[4732]: W0131 09:22:25.405930 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84cedd57_5030_425a_8567_ceeda6aa0109.slice/crio-96c45711338fec70c8940230429e8a7ce9d16f40c43935822502c794635844c0 WatchSource:0}: Error finding container 96c45711338fec70c8940230429e8a7ce9d16f40c43935822502c794635844c0: Status 404 returned error can't find the container with id 96c45711338fec70c8940230429e8a7ce9d16f40c43935822502c794635844c0 Jan 31 09:22:25 crc kubenswrapper[4732]: I0131 09:22:25.857679 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:25 crc kubenswrapper[4732]: E0131 09:22:25.857807 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:25 crc kubenswrapper[4732]: E0131 09:22:25.857827 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm: configmap "swift-ring-files" not found Jan 31 09:22:25 crc kubenswrapper[4732]: E0131 09:22:25.857885 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift podName:bdedfde8-2a77-4328-8d12-1ed7e7c383d7 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:27.857868173 +0000 UTC m=+1286.163744377 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift") pod "swift-proxy-5c474fc7f4-zrlwm" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7") : configmap "swift-ring-files" not found Jan 31 09:22:25 crc kubenswrapper[4732]: I0131 09:22:25.959664 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:25 crc kubenswrapper[4732]: E0131 09:22:25.959871 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:25 crc kubenswrapper[4732]: E0131 09:22:25.959975 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:25 crc kubenswrapper[4732]: E0131 09:22:25.960023 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift podName:ba8f0576-6adb-407c-b8e0-e4b04f0d47e3 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:27.960007878 +0000 UTC m=+1286.265884082 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift") pod "swift-storage-0" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3") : configmap "swift-ring-files" not found Jan 31 09:22:26 crc kubenswrapper[4732]: I0131 09:22:26.068140 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" event={"ID":"84cedd57-5030-425a-8567-ceeda6aa0109","Type":"ContainerStarted","Data":"e9a704782d501296be317dbceec99e3b7ae21d704b38e927a87174fe7c4bd3f1"} Jan 31 09:22:26 crc kubenswrapper[4732]: I0131 09:22:26.068180 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" event={"ID":"84cedd57-5030-425a-8567-ceeda6aa0109","Type":"ContainerStarted","Data":"96c45711338fec70c8940230429e8a7ce9d16f40c43935822502c794635844c0"} Jan 31 09:22:26 crc kubenswrapper[4732]: I0131 09:22:26.089523 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" podStartSLOduration=2.089504661 podStartE2EDuration="2.089504661s" podCreationTimestamp="2026-01-31 09:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:22:26.087292762 +0000 UTC m=+1284.393168976" watchObservedRunningTime="2026-01-31 09:22:26.089504661 +0000 UTC m=+1284.395380865" Jan 31 09:22:27 crc kubenswrapper[4732]: I0131 09:22:27.893417 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:27 crc kubenswrapper[4732]: E0131 09:22:27.893650 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:27 crc kubenswrapper[4732]: E0131 09:22:27.894041 4732 projected.go:194] Error preparing data for 
projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm: configmap "swift-ring-files" not found Jan 31 09:22:27 crc kubenswrapper[4732]: E0131 09:22:27.894104 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift podName:bdedfde8-2a77-4328-8d12-1ed7e7c383d7 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:31.894083785 +0000 UTC m=+1290.199959989 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift") pod "swift-proxy-5c474fc7f4-zrlwm" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7") : configmap "swift-ring-files" not found Jan 31 09:22:27 crc kubenswrapper[4732]: I0131 09:22:27.995807 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:27 crc kubenswrapper[4732]: E0131 09:22:27.996251 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:27 crc kubenswrapper[4732]: E0131 09:22:27.996353 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:27 crc kubenswrapper[4732]: E0131 09:22:27.996479 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift podName:ba8f0576-6adb-407c-b8e0-e4b04f0d47e3 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:31.996452608 +0000 UTC m=+1290.302328842 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift") pod "swift-storage-0" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3") : configmap "swift-ring-files" not found Jan 31 09:22:31 crc kubenswrapper[4732]: I0131 09:22:31.955930 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:31 crc kubenswrapper[4732]: E0131 09:22:31.956177 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:31 crc kubenswrapper[4732]: E0131 09:22:31.956885 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm: configmap "swift-ring-files" not found Jan 31 09:22:31 crc kubenswrapper[4732]: E0131 09:22:31.956968 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift podName:bdedfde8-2a77-4328-8d12-1ed7e7c383d7 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:39.956943204 +0000 UTC m=+1298.262819408 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift") pod "swift-proxy-5c474fc7f4-zrlwm" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7") : configmap "swift-ring-files" not found Jan 31 09:22:32 crc kubenswrapper[4732]: I0131 09:22:32.057950 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:32 crc kubenswrapper[4732]: E0131 09:22:32.058156 4732 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 09:22:32 crc kubenswrapper[4732]: E0131 09:22:32.058170 4732 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 09:22:32 crc kubenswrapper[4732]: E0131 09:22:32.058214 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift podName:ba8f0576-6adb-407c-b8e0-e4b04f0d47e3 nodeName:}" failed. No retries permitted until 2026-01-31 09:22:40.058198942 +0000 UTC m=+1298.364075136 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift") pod "swift-storage-0" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3") : configmap "swift-ring-files" not found Jan 31 09:22:33 crc kubenswrapper[4732]: I0131 09:22:33.146243 4732 generic.go:334] "Generic (PLEG): container finished" podID="84cedd57-5030-425a-8567-ceeda6aa0109" containerID="e9a704782d501296be317dbceec99e3b7ae21d704b38e927a87174fe7c4bd3f1" exitCode=0 Jan 31 09:22:33 crc kubenswrapper[4732]: I0131 09:22:33.146291 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" event={"ID":"84cedd57-5030-425a-8567-ceeda6aa0109","Type":"ContainerDied","Data":"e9a704782d501296be317dbceec99e3b7ae21d704b38e927a87174fe7c4bd3f1"} Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.450493 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.499622 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84cedd57-5030-425a-8567-ceeda6aa0109-etc-swift\") pod \"84cedd57-5030-425a-8567-ceeda6aa0109\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.499748 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-scripts\") pod \"84cedd57-5030-425a-8567-ceeda6aa0109\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.499791 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-swiftconf\") pod \"84cedd57-5030-425a-8567-ceeda6aa0109\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.499872 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9np9m\" (UniqueName: \"kubernetes.io/projected/84cedd57-5030-425a-8567-ceeda6aa0109-kube-api-access-9np9m\") pod \"84cedd57-5030-425a-8567-ceeda6aa0109\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.499909 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-dispersionconf\") pod \"84cedd57-5030-425a-8567-ceeda6aa0109\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.499952 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-combined-ca-bundle\") pod \"84cedd57-5030-425a-8567-ceeda6aa0109\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.500085 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-ring-data-devices\") pod \"84cedd57-5030-425a-8567-ceeda6aa0109\" (UID: \"84cedd57-5030-425a-8567-ceeda6aa0109\") " Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.501290 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "84cedd57-5030-425a-8567-ceeda6aa0109" (UID: "84cedd57-5030-425a-8567-ceeda6aa0109"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.502897 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84cedd57-5030-425a-8567-ceeda6aa0109-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "84cedd57-5030-425a-8567-ceeda6aa0109" (UID: "84cedd57-5030-425a-8567-ceeda6aa0109"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.508964 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84cedd57-5030-425a-8567-ceeda6aa0109-kube-api-access-9np9m" (OuterVolumeSpecName: "kube-api-access-9np9m") pod "84cedd57-5030-425a-8567-ceeda6aa0109" (UID: "84cedd57-5030-425a-8567-ceeda6aa0109"). InnerVolumeSpecName "kube-api-access-9np9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.526140 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-scripts" (OuterVolumeSpecName: "scripts") pod "84cedd57-5030-425a-8567-ceeda6aa0109" (UID: "84cedd57-5030-425a-8567-ceeda6aa0109"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.526236 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "84cedd57-5030-425a-8567-ceeda6aa0109" (UID: "84cedd57-5030-425a-8567-ceeda6aa0109"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.547644 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84cedd57-5030-425a-8567-ceeda6aa0109" (UID: "84cedd57-5030-425a-8567-ceeda6aa0109"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.547728 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "84cedd57-5030-425a-8567-ceeda6aa0109" (UID: "84cedd57-5030-425a-8567-ceeda6aa0109"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.601481 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9np9m\" (UniqueName: \"kubernetes.io/projected/84cedd57-5030-425a-8567-ceeda6aa0109-kube-api-access-9np9m\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.601514 4732 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.601523 4732 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.601532 4732 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.601541 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84cedd57-5030-425a-8567-ceeda6aa0109-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.601550 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84cedd57-5030-425a-8567-ceeda6aa0109-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:34 crc kubenswrapper[4732]: I0131 09:22:34.601560 4732 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84cedd57-5030-425a-8567-ceeda6aa0109-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:35 crc kubenswrapper[4732]: I0131 09:22:35.162345 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" event={"ID":"84cedd57-5030-425a-8567-ceeda6aa0109","Type":"ContainerDied","Data":"96c45711338fec70c8940230429e8a7ce9d16f40c43935822502c794635844c0"} Jan 31 09:22:35 crc kubenswrapper[4732]: I0131 09:22:35.162402 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96c45711338fec70c8940230429e8a7ce9d16f40c43935822502c794635844c0" Jan 31 09:22:35 crc kubenswrapper[4732]: I0131 09:22:35.162427 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-mkrwv" Jan 31 09:22:39 crc kubenswrapper[4732]: I0131 09:22:39.980222 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:39 crc kubenswrapper[4732]: I0131 09:22:39.989876 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") pod \"swift-proxy-5c474fc7f4-zrlwm\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:40 crc kubenswrapper[4732]: I0131 09:22:40.025304 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:40 crc kubenswrapper[4732]: I0131 09:22:40.081843 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:40 crc kubenswrapper[4732]: I0131 09:22:40.087946 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") pod \"swift-storage-0\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:40 crc kubenswrapper[4732]: I0131 09:22:40.100351 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 09:22:40 crc kubenswrapper[4732]: I0131 09:22:40.443584 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm"] Jan 31 09:22:40 crc kubenswrapper[4732]: I0131 09:22:40.553883 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:22:40 crc kubenswrapper[4732]: W0131 09:22:40.554378 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba8f0576_6adb_407c_b8e0_e4b04f0d47e3.slice/crio-db289e5d4324b07ffd624072a8984010d2c9a44d1db929557bee6f1a1928fda3 WatchSource:0}: Error finding container db289e5d4324b07ffd624072a8984010d2c9a44d1db929557bee6f1a1928fda3: Status 404 returned error can't find the container with id db289e5d4324b07ffd624072a8984010d2c9a44d1db929557bee6f1a1928fda3 Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.239495 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" event={"ID":"bdedfde8-2a77-4328-8d12-1ed7e7c383d7","Type":"ContainerStarted","Data":"474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe"} Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.239789 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" event={"ID":"bdedfde8-2a77-4328-8d12-1ed7e7c383d7","Type":"ContainerStarted","Data":"ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69"} Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.239803 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" event={"ID":"bdedfde8-2a77-4328-8d12-1ed7e7c383d7","Type":"ContainerStarted","Data":"1f6dfd5148a93a535a200ac9b68a844f393cc7142eb0dbb00e04c6a279302eea"} Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.241713 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.241742 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.244007 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e"} Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.244050 
4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9"} Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.244059 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572"} Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.244067 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f"} Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.244077 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"db289e5d4324b07ffd624072a8984010d2c9a44d1db929557bee6f1a1928fda3"} Jan 31 09:22:41 crc kubenswrapper[4732]: I0131 09:22:41.268863 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" podStartSLOduration=17.268846254 podStartE2EDuration="17.268846254s" podCreationTimestamp="2026-01-31 09:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:22:41.264778188 +0000 UTC m=+1299.570654392" watchObservedRunningTime="2026-01-31 09:22:41.268846254 +0000 UTC m=+1299.574722458" Jan 31 09:22:42 crc kubenswrapper[4732]: I0131 09:22:42.259427 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f"} Jan 31 09:22:42 crc kubenswrapper[4732]: I0131 09:22:42.259684 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee"} Jan 31 09:22:42 crc kubenswrapper[4732]: I0131 09:22:42.259710 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee"} Jan 31 09:22:42 crc kubenswrapper[4732]: I0131 09:22:42.259721 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03"} Jan 31 09:22:42 crc kubenswrapper[4732]: I0131 09:22:42.259732 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32"} Jan 31 09:22:42 crc kubenswrapper[4732]: I0131 09:22:42.259743 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922"} Jan 31 09:22:43 crc kubenswrapper[4732]: I0131 09:22:43.274406 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c"} Jan 31 09:22:43 crc kubenswrapper[4732]: I0131 09:22:43.275464 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff"} Jan 31 09:22:43 crc kubenswrapper[4732]: I0131 09:22:43.275550 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c"} Jan 31 09:22:43 crc kubenswrapper[4732]: I0131 09:22:43.275615 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717"} Jan 31 09:22:43 crc kubenswrapper[4732]: I0131 09:22:43.275685 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerStarted","Data":"17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f"} Jan 31 09:22:45 crc kubenswrapper[4732]: I0131 09:22:45.035786 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:45 crc kubenswrapper[4732]: I0131 09:22:45.039108 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:45 crc kubenswrapper[4732]: I0131 09:22:45.061813 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=22.061794319 podStartE2EDuration="22.061794319s" podCreationTimestamp="2026-01-31 09:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:22:43.330419252 +0000 UTC m=+1301.636295546" watchObservedRunningTime="2026-01-31 09:22:45.061794319 +0000 UTC m=+1303.367670523" Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.578952 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.580795 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-server" containerID="cri-o://cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.580963 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-auditor" containerID="cri-o://08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c" gracePeriod=30 Jan 31 09:22:46 crc 
kubenswrapper[4732]: I0131 09:22:46.580997 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-expirer" containerID="cri-o://d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.580997 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-replicator" containerID="cri-o://4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.581056 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-replicator" containerID="cri-o://a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.580902 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-auditor" containerID="cri-o://c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.581088 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-updater" containerID="cri-o://17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.581136 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="rsync" containerID="cri-o://5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.580943 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-auditor" containerID="cri-o://3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.581163 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-replicator" containerID="cri-o://78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.580987 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-updater" containerID="cri-o://92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee" gracePeriod=30 Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.580953 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-reaper" containerID="cri-o://13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e" 
Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.580955 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-server" containerID="cri-o://f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922" gracePeriod=30
Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.580855 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-server" containerID="cri-o://a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee" gracePeriod=30
Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.581062 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="swift-recon-cron" containerID="cri-o://21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff" gracePeriod=30
Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.611195 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mkrwv"]
Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.637417 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mkrwv"]
Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.651130 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm"]
Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.651360 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" podUID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerName="proxy-httpd" containerID="cri-o://ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69" gracePeriod=30
Jan 31 09:22:46 crc kubenswrapper[4732]: I0131 09:22:46.651486 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" podUID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerName="proxy-server" containerID="cri-o://474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe" gracePeriod=30
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.307873 4732 generic.go:334] "Generic (PLEG): container finished" podID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerID="ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69" exitCode=0
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.307937 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" event={"ID":"bdedfde8-2a77-4328-8d12-1ed7e7c383d7","Type":"ContainerDied","Data":"ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69"}
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.312947 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c" exitCode=0
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.312976 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717" exitCode=0
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.312985 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f" exitCode=0
podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.312993 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313001 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313008 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313015 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313021 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313028 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313034 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313040 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313046 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313052 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313059 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f" exitCode=0 Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313076 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313096 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313106 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313115 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313123 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313131 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313139 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313148 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313156 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313164 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313172 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313180 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9"} Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.313188 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572"} Jan 31 09:22:47 crc 
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.497590 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.497640 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.588263 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm"
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.709591 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-public-tls-certs\") pod \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") "
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.709638 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-config-data\") pod \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") "
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.709677 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-run-httpd\") pod \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") "
Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.710048 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bdedfde8-2a77-4328-8d12-1ed7e7c383d7" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.710092 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmr65\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-kube-api-access-nmr65\") pod \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.710486 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-combined-ca-bundle\") pod \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.710535 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") pod \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.710555 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-log-httpd\") pod \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.710585 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-internal-tls-certs\") pod \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\" (UID: \"bdedfde8-2a77-4328-8d12-1ed7e7c383d7\") " Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.710896 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bdedfde8-2a77-4328-8d12-1ed7e7c383d7" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.711246 4732 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.711262 4732 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.715420 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-kube-api-access-nmr65" (OuterVolumeSpecName: "kube-api-access-nmr65") pod "bdedfde8-2a77-4328-8d12-1ed7e7c383d7" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7"). InnerVolumeSpecName "kube-api-access-nmr65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.728855 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bdedfde8-2a77-4328-8d12-1ed7e7c383d7" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.771913 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-config-data" (OuterVolumeSpecName: "config-data") pod "bdedfde8-2a77-4328-8d12-1ed7e7c383d7" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.775445 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdedfde8-2a77-4328-8d12-1ed7e7c383d7" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.778780 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bdedfde8-2a77-4328-8d12-1ed7e7c383d7" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.812102 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.812136 4732 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.812148 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.812159 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmr65\" (UniqueName: \"kubernetes.io/projected/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-kube-api-access-nmr65\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.812168 4732 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.814090 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bdedfde8-2a77-4328-8d12-1ed7e7c383d7" (UID: "bdedfde8-2a77-4328-8d12-1ed7e7c383d7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:22:47 crc kubenswrapper[4732]: I0131 09:22:47.913883 4732 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdedfde8-2a77-4328-8d12-1ed7e7c383d7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.325565 4732 generic.go:334] "Generic (PLEG): container finished" podID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerID="474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe" exitCode=0 Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.325607 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" event={"ID":"bdedfde8-2a77-4328-8d12-1ed7e7c383d7","Type":"ContainerDied","Data":"474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe"} Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.325638 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" event={"ID":"bdedfde8-2a77-4328-8d12-1ed7e7c383d7","Type":"ContainerDied","Data":"1f6dfd5148a93a535a200ac9b68a844f393cc7142eb0dbb00e04c6a279302eea"} Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.325685 4732 scope.go:117] "RemoveContainer" containerID="474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe" Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.325870 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm" Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.358543 4732 scope.go:117] "RemoveContainer" containerID="ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69" Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.361942 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm"] Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.368596 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-zrlwm"] Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.377746 4732 scope.go:117] "RemoveContainer" containerID="474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe" Jan 31 09:22:48 crc kubenswrapper[4732]: E0131 09:22:48.378326 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe\": container with ID starting with 474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe not found: ID does not exist" containerID="474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe" Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.378393 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe"} err="failed to get container status \"474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe\": rpc error: code = NotFound desc = could not find container \"474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe\": container with ID starting with 474b94ec58984f199d9081036362611a027ce433aaf4a25491214d41cf2633fe not found: ID does not exist" Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.378436 4732 scope.go:117] "RemoveContainer" containerID="ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69" Jan 31 09:22:48 
Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.378997 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69"} err="failed to get container status \"ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69\": rpc error: code = NotFound desc = could not find container \"ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69\": container with ID starting with ec8ba38d54836eb416cdf4f64a5064c1d48567987a1990cab21c63cec5dcea69 not found: ID does not exist"
Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.556412 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84cedd57-5030-425a-8567-ceeda6aa0109" path="/var/lib/kubelet/pods/84cedd57-5030-425a-8567-ceeda6aa0109/volumes"
Jan 31 09:22:48 crc kubenswrapper[4732]: I0131 09:22:48.557545 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" path="/var/lib/kubelet/pods/bdedfde8-2a77-4328-8d12-1ed7e7c383d7/volumes"
Jan 31 09:23:04 crc kubenswrapper[4732]: I0131 09:23:04.176241 4732 scope.go:117] "RemoveContainer" containerID="ee0c5fb66ad6b1e6b4f42d6ec9cbbd6de131ce1fe2d36272c309841e4eaa9b41"
Jan 31 09:23:04 crc kubenswrapper[4732]: I0131 09:23:04.197838 4732 scope.go:117] "RemoveContainer" containerID="a84f6527d8e718a0c39029fc384c436e9dc135176afd05bca86ac5610704533f"
Jan 31 09:23:04 crc kubenswrapper[4732]: I0131 09:23:04.215719 4732 scope.go:117] "RemoveContainer" containerID="1fa43d1eeb8c60e26dee8d78abfd035753a71d50bc43bc0b9b3bff5eca6e2540"
Jan 31 09:23:16 crc kubenswrapper[4732]: I0131 09:23:16.962773 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.052554 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") "
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.052653 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-combined-ca-bundle\") pod \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") "
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.052787 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") pod \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") "
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.052826 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sjw2\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-kube-api-access-7sjw2\") pod \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") "
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.052855 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-cache\") pod \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") "
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.052929 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-lock\") pod \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\" (UID: \"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3\") "
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.053766 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-cache" (OuterVolumeSpecName: "cache") pod "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.053981 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-lock" (OuterVolumeSpecName: "lock") pod "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.058641 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.059120 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-kube-api-access-7sjw2" (OuterVolumeSpecName: "kube-api-access-7sjw2") pod "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3"). InnerVolumeSpecName "kube-api-access-7sjw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.062748 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.155224 4732 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.155262 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sjw2\" (UniqueName: \"kubernetes.io/projected/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-kube-api-access-7sjw2\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.155273 4732 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-cache\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.155282 4732 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-lock\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.155310 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.168902 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.256542 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.280462 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" (UID: "ba8f0576-6adb-407c-b8e0-e4b04f0d47e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.358075 4732 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.498304 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.498378 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.498429 4732 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.499149 4732 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"777b6bb11b5556f90e1c2a08822928a50217112bcd9efce47de0d5e1a98e3392"} pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.499206 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" containerID="cri-o://777b6bb11b5556f90e1c2a08822928a50217112bcd9efce47de0d5e1a98e3392" gracePeriod=600 Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.583529 4732 generic.go:334] "Generic (PLEG): container finished" podID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerID="21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff" exitCode=137 Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.583573 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff"} Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.583603 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"ba8f0576-6adb-407c-b8e0-e4b04f0d47e3","Type":"ContainerDied","Data":"db289e5d4324b07ffd624072a8984010d2c9a44d1db929557bee6f1a1928fda3"} Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.583624 4732 scope.go:117] "RemoveContainer" containerID="21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.583825 4732 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.600040 4732 scope.go:117] "RemoveContainer" containerID="5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.624620 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.633370 4732 scope.go:117] "RemoveContainer" containerID="d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.638894 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.651185 4732 scope.go:117] "RemoveContainer" containerID="17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.672287 4732 scope.go:117] "RemoveContainer" containerID="08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.734459 4732 scope.go:117] "RemoveContainer" containerID="78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.752441 4732 scope.go:117] "RemoveContainer" containerID="a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.782778 4732 scope.go:117] "RemoveContainer" containerID="92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.799511 4732 scope.go:117] "RemoveContainer" containerID="c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.814884 4732 scope.go:117] "RemoveContainer" containerID="4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.833229 4732 scope.go:117] "RemoveContainer" containerID="f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.847151 4732 scope.go:117] "RemoveContainer" containerID="13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.868081 4732 scope.go:117] "RemoveContainer" containerID="3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.887445 4732 scope.go:117] "RemoveContainer" containerID="a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.903694 4732 scope.go:117] "RemoveContainer" containerID="cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.923616 4732 scope.go:117] "RemoveContainer" containerID="21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.923995 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff\": container with ID starting with 21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff not found: ID does not exist" containerID="21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924026 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff"} err="failed to get container status \"21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff\": rpc error: code = NotFound desc = could not find container \"21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff\": container with ID starting with 21e962a9a2a9342a7ea4a394d67dccb2bcba098409718fcc20c041b19f8e8aff not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924048 4732 scope.go:117] "RemoveContainer" containerID="5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.924262 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c\": container with ID starting with 5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c not found: ID does not exist" containerID="5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924293 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c"} err="failed to get container status \"5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c\": rpc error: code = NotFound desc = could not find container \"5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c\": container with ID starting with 5274a5fdf883aabb8e1459ab78b419e45850c93bd25d640d9b574e423e14891c not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924312 4732 scope.go:117] "RemoveContainer" containerID="d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.924517 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717\": container with ID starting with d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717 not found: ID does not exist" containerID="d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924548 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717"} err="failed to get container status \"d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717\": rpc error: code = NotFound desc = could not find container \"d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717\": container with ID starting with d2fec506922e973332f3fc9bd8c7f3a11bbfe136d64d26f49002b37bbb05e717 not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924570 4732 scope.go:117] "RemoveContainer" containerID="17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.924761 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f\": container with ID starting with 17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f not found: ID does not exist" containerID="17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924782 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f"} err="failed to get container status \"17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f\": rpc error: code = NotFound desc = could not find container \"17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f\": container with ID starting with 17c4084b4095035a8bb4f76d7014f2798296fac02e063cfc7abffe90d8b7294f not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924795 4732 scope.go:117] "RemoveContainer" containerID="08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.924962 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c\": container with ID starting with 08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c not found: ID does not exist" containerID="08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924984 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c"} err="failed to get container status \"08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c\": rpc error: code = NotFound desc = could not find container \"08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c\": container with ID starting with 08aa6d73b6ce55a5fd4af30b26217cd60a1641aca38733b3a337dfbea21a313c not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.924996 4732 scope.go:117] "RemoveContainer" containerID="78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.925148 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f\": container with ID starting with 78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f not found: ID does not exist" containerID="78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.925168 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f"} err="failed to get container status \"78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f\": rpc error: code = NotFound desc = could not find container \"78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f\": container with ID starting with 78c47a8e4097580d0d62fedad615c2e7de129377d57e4dde3ee9b8817b3ebe4f not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.925179 4732 scope.go:117] "RemoveContainer" containerID="a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.925520 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee\": container with ID starting with a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee not found: ID does not exist" containerID="a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.925542 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee"} err="failed to get container status \"a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee\": rpc error: code = NotFound desc = could not find container \"a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee\": container with ID starting with a9463f09258cd249e7ac5eeabeac6f271d8661141768b16e79dc29de794bbfee not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.925555 4732 scope.go:117] "RemoveContainer" containerID="92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.925756 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee\": container with ID starting with 92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee not found: ID does not exist" containerID="92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.925776 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee"} err="failed to get container status \"92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee\": rpc error: code = NotFound desc = could not find container \"92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee\": container with ID starting with 92b05acc8700ea245d97120676710a498d03141bf1d8600bf5a37e131c2283ee not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.925788 4732 scope.go:117] "RemoveContainer" containerID="c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.925954 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03\": container with ID starting with c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03 not found: ID does not exist" containerID="c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.925975 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03"} err="failed to get container status \"c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03\": rpc error: code = NotFound desc = could not find container \"c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03\": container with ID starting with c2c74e7d8a1ebd17ed649b720d8fb589be94684e535f53dd916a2699096a6a03 not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.925993 4732 scope.go:117] "RemoveContainer" containerID="4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32"
containerID="4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32" Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.926161 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32\": container with ID starting with 4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32 not found: ID does not exist" containerID="4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.926184 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32"} err="failed to get container status \"4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32\": rpc error: code = NotFound desc = could not find container \"4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32\": container with ID starting with 4b19ce1044930f4e1d8867e77cdf09526d8f2b3e3900dc1f91071adaf1162e32 not found: ID does not exist" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.926199 4732 scope.go:117] "RemoveContainer" containerID="f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922" Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.926369 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922\": container with ID starting with f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922 not found: ID does not exist" containerID="f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.926389 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922"} err="failed to get container status \"f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922\": rpc error: code = NotFound desc = could not find container \"f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922\": container with ID starting with f3b6deecf746ea640b42712bfb03bb8b827bed6b84301b438ba02ffe4845a922 not found: ID does not exist" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.926404 4732 scope.go:117] "RemoveContainer" containerID="13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e" Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.926580 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e\": container with ID starting with 13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e not found: ID does not exist" containerID="13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e" Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.926599 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e"} err="failed to get container status \"13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e\": rpc error: code = NotFound desc = could not find container \"13d7343d1751fb8940398ae0071750f1f6117ba7b59e7dfa6990eb52a895303e\": container with ID starting with 
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.926611 4732 scope.go:117] "RemoveContainer" containerID="3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.926777 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9\": container with ID starting with 3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9 not found: ID does not exist" containerID="3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.926799 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9"} err="failed to get container status \"3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9\": rpc error: code = NotFound desc = could not find container \"3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9\": container with ID starting with 3ebca277c4b203371cd9fe26b95bfe03d230345a47fb88a2eff217b2ba6b95b9 not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.926813 4732 scope.go:117] "RemoveContainer" containerID="a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.927006 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572\": container with ID starting with a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572 not found: ID does not exist" containerID="a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.927023 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572"} err="failed to get container status \"a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572\": rpc error: code = NotFound desc = could not find container \"a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572\": container with ID starting with a26623b23e379dae3834fe0d0647fe0c758307c675c5178e21ed5e70d77db572 not found: ID does not exist"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.927037 4732 scope.go:117] "RemoveContainer" containerID="cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f"
Jan 31 09:23:17 crc kubenswrapper[4732]: E0131 09:23:17.927216 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f\": container with ID starting with cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f not found: ID does not exist" containerID="cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f"
Jan 31 09:23:17 crc kubenswrapper[4732]: I0131 09:23:17.927249 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f"} err="failed to get container status \"cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f\": rpc error: code = NotFound desc = could not find container \"cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f\": container with ID starting with cd4637a4eac10985ea1fdfa77841cc57b4b13086ee7a639ec9095e4b5fe6443f not found: ID does not exist"
Jan 31 09:23:18 crc kubenswrapper[4732]: I0131 09:23:18.555105 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" path="/var/lib/kubelet/pods/ba8f0576-6adb-407c-b8e0-e4b04f0d47e3/volumes"
Jan 31 09:23:18 crc kubenswrapper[4732]: I0131 09:23:18.597515 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d790207-d357-4b47-87bf-5b505e061820" containerID="777b6bb11b5556f90e1c2a08822928a50217112bcd9efce47de0d5e1a98e3392" exitCode=0
Jan 31 09:23:18 crc kubenswrapper[4732]: I0131 09:23:18.597564 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerDied","Data":"777b6bb11b5556f90e1c2a08822928a50217112bcd9efce47de0d5e1a98e3392"}
Jan 31 09:23:18 crc kubenswrapper[4732]: I0131 09:23:18.597599 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77"}
Jan 31 09:23:18 crc kubenswrapper[4732]: I0131 09:23:18.597620 4732 scope.go:117] "RemoveContainer" containerID="99271a603de3a603b9be8d8f0bb791de0f202646de27403a5e3efc59790f637f"
Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.067677 4732 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.231:50112->38.129.56.231:32957: write tcp 38.129.56.231:50112->38.129.56.231:32957: write: broken pipe
Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.547574 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-482fl"]
Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.560123 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-482fl"]
Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602446 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbicanf734-account-delete-g264z"]
Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602717 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-auditor"
Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602728 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-auditor"
Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602746 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="rsync"
Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602756 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="rsync"
Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602766 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-reaper"
Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602773 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-reaper"
Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602781 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-updater"
09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602781 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-updater" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602788 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-updater" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602799 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-replicator" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602807 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-replicator" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602817 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-server" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602822 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-server" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602832 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-expirer" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602838 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-expirer" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602845 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerName="proxy-httpd" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602852 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerName="proxy-httpd" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602864 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-updater" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602869 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-updater" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602882 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-server" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602887 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-server" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602897 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-replicator" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602903 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-replicator" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602909 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-auditor" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602915 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" 
containerName="account-auditor" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602926 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-auditor" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602931 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-auditor" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602940 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerName="proxy-server" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602946 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerName="proxy-server" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602953 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-replicator" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602959 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-replicator" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602968 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="swift-recon-cron" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602974 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="swift-recon-cron" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602984 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-server" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.602990 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-server" Jan 31 09:23:23 crc kubenswrapper[4732]: E0131 09:23:23.602997 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84cedd57-5030-425a-8567-ceeda6aa0109" containerName="swift-ring-rebalance" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603003 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="84cedd57-5030-425a-8567-ceeda6aa0109" containerName="swift-ring-rebalance" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603113 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-server" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603127 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="84cedd57-5030-425a-8567-ceeda6aa0109" containerName="swift-ring-rebalance" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603134 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-replicator" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603141 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-server" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603153 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerName="proxy-httpd" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603161 4732 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="bdedfde8-2a77-4328-8d12-1ed7e7c383d7" containerName="proxy-server" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603171 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-auditor" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603178 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-server" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603186 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-replicator" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603193 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-auditor" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603200 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="account-reaper" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603207 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-updater" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603215 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-auditor" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603223 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="container-updater" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603230 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="rsync" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603238 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-replicator" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603245 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="object-expirer" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603255 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8f0576-6adb-407c-b8e0-e4b04f0d47e3" containerName="swift-recon-cron" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.603714 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.611220 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbicanf734-account-delete-g264z"] Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.622646 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-worker-597f49d4f-8nh24"] Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.622950 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" podUID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerName="barbican-worker-log" containerID="cri-o://61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962" gracePeriod=30 Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.623087 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" podUID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerName="barbican-worker" containerID="cri-o://9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf" gracePeriod=30 Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.665645 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5d550c8-968a-4962-9e23-c0c22911913d-operator-scripts\") pod \"barbicanf734-account-delete-g264z\" (UID: \"e5d550c8-968a-4962-9e23-c0c22911913d\") " pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.666251 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv56f\" (UniqueName: \"kubernetes.io/projected/e5d550c8-968a-4962-9e23-c0c22911913d-kube-api-access-nv56f\") pod \"barbicanf734-account-delete-g264z\" (UID: \"e5d550c8-968a-4962-9e23-c0c22911913d\") " pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.669481 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq"] Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.669828 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" podUID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerName="barbican-api-log" containerID="cri-o://2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1" gracePeriod=30 Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.670148 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" podUID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerName="barbican-api" containerID="cri-o://341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8" gracePeriod=30 Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.689204 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8"] Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.689481 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" podUID="7728f3b2-7258-444d-982b-10d416bb61f0" containerName="barbican-keystone-listener-log" containerID="cri-o://d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb" 
gracePeriod=30 Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.690028 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" podUID="7728f3b2-7258-444d-982b-10d416bb61f0" containerName="barbican-keystone-listener" containerID="cri-o://e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2" gracePeriod=30 Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.769340 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv56f\" (UniqueName: \"kubernetes.io/projected/e5d550c8-968a-4962-9e23-c0c22911913d-kube-api-access-nv56f\") pod \"barbicanf734-account-delete-g264z\" (UID: \"e5d550c8-968a-4962-9e23-c0c22911913d\") " pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.769476 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5d550c8-968a-4962-9e23-c0c22911913d-operator-scripts\") pod \"barbicanf734-account-delete-g264z\" (UID: \"e5d550c8-968a-4962-9e23-c0c22911913d\") " pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.771140 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5d550c8-968a-4962-9e23-c0c22911913d-operator-scripts\") pod \"barbicanf734-account-delete-g264z\" (UID: \"e5d550c8-968a-4962-9e23-c0c22911913d\") " pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.809388 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv56f\" (UniqueName: \"kubernetes.io/projected/e5d550c8-968a-4962-9e23-c0c22911913d-kube-api-access-nv56f\") pod \"barbicanf734-account-delete-g264z\" (UID: \"e5d550c8-968a-4962-9e23-c0c22911913d\") " pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:23 crc kubenswrapper[4732]: I0131 09:23:23.932903 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.332473 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbicanf734-account-delete-g264z"] Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.550088 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52cae0eb-413d-4365-a717-8039a3e3b99f" path="/var/lib/kubelet/pods/52cae0eb-413d-4365-a717-8039a3e3b99f/volumes" Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.653277 4732 generic.go:334] "Generic (PLEG): container finished" podID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerID="2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1" exitCode=143 Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.653350 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" event={"ID":"4981d9a9-898f-49ff-809d-58c7ca3bd2a3","Type":"ContainerDied","Data":"2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1"} Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.654527 4732 generic.go:334] "Generic (PLEG): container finished" podID="e5d550c8-968a-4962-9e23-c0c22911913d" containerID="def72532b530c78803f0cccd8c5a2d65a616e430cf1d92043ad3623133222585" exitCode=0 Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.654598 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" event={"ID":"e5d550c8-968a-4962-9e23-c0c22911913d","Type":"ContainerDied","Data":"def72532b530c78803f0cccd8c5a2d65a616e430cf1d92043ad3623133222585"} Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.654629 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" event={"ID":"e5d550c8-968a-4962-9e23-c0c22911913d","Type":"ContainerStarted","Data":"146893e4283a7dc87ac374f210d6e9d776b02509f4ed67945e0b2dcbb18bfb2f"} Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.656643 4732 generic.go:334] "Generic (PLEG): container finished" podID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerID="61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962" exitCode=143 Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.656690 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" event={"ID":"d3b1cc40-9985-45d8-bb06-0676ff188c6c","Type":"ContainerDied","Data":"61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962"} Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.658098 4732 generic.go:334] "Generic (PLEG): container finished" podID="7728f3b2-7258-444d-982b-10d416bb61f0" containerID="d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb" exitCode=143 Jan 31 09:23:24 crc kubenswrapper[4732]: I0131 09:23:24.658123 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" event={"ID":"7728f3b2-7258-444d-982b-10d416bb61f0","Type":"ContainerDied","Data":"d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb"} Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.015339 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-r757b"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.022169 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-vj5z4"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 
09:23:25.032063 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-vj5z4"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.050439 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-r757b"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.063335 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-7959cb4f8b-4vxsg"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.063530 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" podUID="1ee84530-efd7-4d83-9aa2-fb9b8b178496" containerName="keystone-api" containerID="cri-o://2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932" gracePeriod=30 Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.080894 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone5b86-account-delete-qkt4p"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.081693 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.087061 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssz6z\" (UniqueName: \"kubernetes.io/projected/1654407d-7276-4839-839d-1244759c4ad2-kube-api-access-ssz6z\") pod \"keystone5b86-account-delete-qkt4p\" (UID: \"1654407d-7276-4839-839d-1244759c4ad2\") " pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.087108 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts\") pod \"keystone5b86-account-delete-qkt4p\" (UID: \"1654407d-7276-4839-839d-1244759c4ad2\") " pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.093141 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone5b86-account-delete-qkt4p"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.187995 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssz6z\" (UniqueName: \"kubernetes.io/projected/1654407d-7276-4839-839d-1244759c4ad2-kube-api-access-ssz6z\") pod \"keystone5b86-account-delete-qkt4p\" (UID: \"1654407d-7276-4839-839d-1244759c4ad2\") " pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.188045 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts\") pod \"keystone5b86-account-delete-qkt4p\" (UID: \"1654407d-7276-4839-839d-1244759c4ad2\") " pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.188805 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts\") pod \"keystone5b86-account-delete-qkt4p\" (UID: \"1654407d-7276-4839-839d-1244759c4ad2\") " pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.204945 4732 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.209483 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssz6z\" (UniqueName: \"kubernetes.io/projected/1654407d-7276-4839-839d-1244759c4ad2-kube-api-access-ssz6z\") pod \"keystone5b86-account-delete-qkt4p\" (UID: \"1654407d-7276-4839-839d-1244759c4ad2\") " pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.389375 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728f3b2-7258-444d-982b-10d416bb61f0-logs\") pod \"7728f3b2-7258-444d-982b-10d416bb61f0\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.389480 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data-custom\") pod \"7728f3b2-7258-444d-982b-10d416bb61f0\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.389553 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bkhk\" (UniqueName: \"kubernetes.io/projected/7728f3b2-7258-444d-982b-10d416bb61f0-kube-api-access-6bkhk\") pod \"7728f3b2-7258-444d-982b-10d416bb61f0\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.389577 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data\") pod \"7728f3b2-7258-444d-982b-10d416bb61f0\" (UID: \"7728f3b2-7258-444d-982b-10d416bb61f0\") " Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.390015 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7728f3b2-7258-444d-982b-10d416bb61f0-logs" (OuterVolumeSpecName: "logs") pod "7728f3b2-7258-444d-982b-10d416bb61f0" (UID: "7728f3b2-7258-444d-982b-10d416bb61f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.392839 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7728f3b2-7258-444d-982b-10d416bb61f0-kube-api-access-6bkhk" (OuterVolumeSpecName: "kube-api-access-6bkhk") pod "7728f3b2-7258-444d-982b-10d416bb61f0" (UID: "7728f3b2-7258-444d-982b-10d416bb61f0"). InnerVolumeSpecName "kube-api-access-6bkhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.393261 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7728f3b2-7258-444d-982b-10d416bb61f0" (UID: "7728f3b2-7258-444d-982b-10d416bb61f0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.405865 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.442429 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data" (OuterVolumeSpecName: "config-data") pod "7728f3b2-7258-444d-982b-10d416bb61f0" (UID: "7728f3b2-7258-444d-982b-10d416bb61f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.490746 4732 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728f3b2-7258-444d-982b-10d416bb61f0-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.490769 4732 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.490778 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bkhk\" (UniqueName: \"kubernetes.io/projected/7728f3b2-7258-444d-982b-10d416bb61f0-kube-api-access-6bkhk\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.490786 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728f3b2-7258-444d-982b-10d416bb61f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.657876 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone5b86-account-delete-qkt4p"] Jan 31 09:23:25 crc kubenswrapper[4732]: W0131 09:23:25.662186 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1654407d_7276_4839_839d_1244759c4ad2.slice/crio-7d6e2907463735b69f40b7370fe0069cdb743cf23d1f9aeb0278bee0ffa6f8b0 WatchSource:0}: Error finding container 7d6e2907463735b69f40b7370fe0069cdb743cf23d1f9aeb0278bee0ffa6f8b0: Status 404 returned error can't find the container with id 7d6e2907463735b69f40b7370fe0069cdb743cf23d1f9aeb0278bee0ffa6f8b0 Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.666213 4732 generic.go:334] "Generic (PLEG): container finished" podID="7728f3b2-7258-444d-982b-10d416bb61f0" containerID="e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2" exitCode=0 Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.666289 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" event={"ID":"7728f3b2-7258-444d-982b-10d416bb61f0","Type":"ContainerDied","Data":"e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2"} Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.666328 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" event={"ID":"7728f3b2-7258-444d-982b-10d416bb61f0","Type":"ContainerDied","Data":"5117dba79725a077cd6ab2eb2327b8a7f198da1d9f337e252448199e446feb3a"} Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.666348 4732 scope.go:117] "RemoveContainer" containerID="e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.666424 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.691027 4732 scope.go:117] "RemoveContainer" containerID="d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.705367 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.712723 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5f5b7fdb46-c8wb8"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.726306 4732 scope.go:117] "RemoveContainer" containerID="e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2" Jan 31 09:23:25 crc kubenswrapper[4732]: E0131 09:23:25.726830 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2\": container with ID starting with e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2 not found: ID does not exist" containerID="e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.726877 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2"} err="failed to get container status \"e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2\": rpc error: code = NotFound desc = could not find container \"e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2\": container with ID starting with e43f0104f4b5c35da143cfe8b1821aeb80b3326882a44cc53de6938465a118a2 not found: ID does not exist" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.726904 4732 scope.go:117] "RemoveContainer" containerID="d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb" Jan 31 09:23:25 crc kubenswrapper[4732]: E0131 09:23:25.730172 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb\": container with ID starting with d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb not found: ID does not exist" containerID="d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.730234 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb"} err="failed to get container status \"d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb\": rpc error: code = NotFound desc = could not find container \"d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb\": container with ID starting with d4d07af4bd1a6391d336e9eb08f85f38b781a53ac9cbe6e99069d22ae367b8eb not found: ID does not exist" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.810414 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-pjmfd"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.821407 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-pjmfd"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.860815 4732 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-x22v8"] Jan 31 09:23:25 crc kubenswrapper[4732]: E0131 09:23:25.861199 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7728f3b2-7258-444d-982b-10d416bb61f0" containerName="barbican-keystone-listener" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.861214 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7728f3b2-7258-444d-982b-10d416bb61f0" containerName="barbican-keystone-listener" Jan 31 09:23:25 crc kubenswrapper[4732]: E0131 09:23:25.861230 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7728f3b2-7258-444d-982b-10d416bb61f0" containerName="barbican-keystone-listener-log" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.861238 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7728f3b2-7258-444d-982b-10d416bb61f0" containerName="barbican-keystone-listener-log" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.861410 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="7728f3b2-7258-444d-982b-10d416bb61f0" containerName="barbican-keystone-listener-log" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.861424 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="7728f3b2-7258-444d-982b-10d416bb61f0" containerName="barbican-keystone-listener" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.862009 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.864012 4732 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.869732 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.878143 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.896259 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-x22v8"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.901591 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.933486 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-x22v8"] Jan 31 09:23:25 crc kubenswrapper[4732]: E0131 09:23:25.933968 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-4wpw4 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/root-account-create-update-x22v8" podUID="17f70688-a1f8-4465-821b-48f5381ff96c" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.979855 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.997364 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wpw4\" (UniqueName: \"kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4\") pod \"root-account-create-update-x22v8\" (UID: \"17f70688-a1f8-4465-821b-48f5381ff96c\") " pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:25 crc kubenswrapper[4732]: I0131 09:23:25.997423 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts\") pod \"root-account-create-update-x22v8\" (UID: \"17f70688-a1f8-4465-821b-48f5381ff96c\") " pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.050362 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/openstack-galera-2" podUID="f7eb0179-b292-4a09-a07d-3d9bfe7978f3" containerName="galera" containerID="cri-o://bf4dd15bf1d89a707a1cc67b69c360511a62db3a9d76bcd538b729b470197a50" gracePeriod=30 Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.098502 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv56f\" (UniqueName: \"kubernetes.io/projected/e5d550c8-968a-4962-9e23-c0c22911913d-kube-api-access-nv56f\") pod \"e5d550c8-968a-4962-9e23-c0c22911913d\" (UID: \"e5d550c8-968a-4962-9e23-c0c22911913d\") " Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.098692 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5d550c8-968a-4962-9e23-c0c22911913d-operator-scripts\") pod \"e5d550c8-968a-4962-9e23-c0c22911913d\" (UID: \"e5d550c8-968a-4962-9e23-c0c22911913d\") " Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.099024 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpw4\" (UniqueName: \"kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4\") pod \"root-account-create-update-x22v8\" (UID: \"17f70688-a1f8-4465-821b-48f5381ff96c\") " pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.099065 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts\") pod \"root-account-create-update-x22v8\" (UID: \"17f70688-a1f8-4465-821b-48f5381ff96c\") " pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.099188 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.099258 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts podName:17f70688-a1f8-4465-821b-48f5381ff96c nodeName:}" failed. No retries permitted until 2026-01-31 09:23:26.599240079 +0000 UTC m=+1344.905116283 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts") pod "root-account-create-update-x22v8" (UID: "17f70688-a1f8-4465-821b-48f5381ff96c") : configmap "openstack-scripts" not found Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.099281 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d550c8-968a-4962-9e23-c0c22911913d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5d550c8-968a-4962-9e23-c0c22911913d" (UID: "e5d550c8-968a-4962-9e23-c0c22911913d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.105046 4732 projected.go:194] Error preparing data for projected volume kube-api-access-4wpw4 for pod swift-kuttl-tests/root-account-create-update-x22v8: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.105129 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4 podName:17f70688-a1f8-4465-821b-48f5381ff96c nodeName:}" failed. No retries permitted until 2026-01-31 09:23:26.605106761 +0000 UTC m=+1344.910983025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4wpw4" (UniqueName: "kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4") pod "root-account-create-update-x22v8" (UID: "17f70688-a1f8-4465-821b-48f5381ff96c") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.110906 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d550c8-968a-4962-9e23-c0c22911913d-kube-api-access-nv56f" (OuterVolumeSpecName: "kube-api-access-nv56f") pod "e5d550c8-968a-4962-9e23-c0c22911913d" (UID: "e5d550c8-968a-4962-9e23-c0c22911913d"). InnerVolumeSpecName "kube-api-access-nv56f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.200819 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv56f\" (UniqueName: \"kubernetes.io/projected/e5d550c8-968a-4962-9e23-c0c22911913d-kube-api-access-nv56f\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.200850 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5d550c8-968a-4962-9e23-c0c22911913d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.410523 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.410776 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/memcached-0" podUID="c0d4fa62-a33c-4ab2-a446-697994c1541e" containerName="memcached" containerID="cri-o://3bead2528c2ca5acd2f850812b4f5c0dc70486b1bf295a4a4a607b3a99c2b59e" gracePeriod=30 Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.549577 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1568a5da-d308-4b7e-94b6-99c846371cb8" path="/var/lib/kubelet/pods/1568a5da-d308-4b7e-94b6-99c846371cb8/volumes" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.550496 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d8a630-8f89-44aa-9f24-2f1b279cccfd" path="/var/lib/kubelet/pods/51d8a630-8f89-44aa-9f24-2f1b279cccfd/volumes" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.551005 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a47d3d-88f0-48b4-b672-9b224ead785f" path="/var/lib/kubelet/pods/65a47d3d-88f0-48b4-b672-9b224ead785f/volumes" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.551981 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7728f3b2-7258-444d-982b-10d416bb61f0" path="/var/lib/kubelet/pods/7728f3b2-7258-444d-982b-10d416bb61f0/volumes" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.605984 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpw4\" (UniqueName: \"kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4\") pod \"root-account-create-update-x22v8\" (UID: \"17f70688-a1f8-4465-821b-48f5381ff96c\") " pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.606044 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts\") pod \"root-account-create-update-x22v8\" (UID: \"17f70688-a1f8-4465-821b-48f5381ff96c\") " pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.606181 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.606236 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts podName:17f70688-a1f8-4465-821b-48f5381ff96c nodeName:}" failed. No retries permitted until 2026-01-31 09:23:27.606217918 +0000 UTC m=+1345.912094122 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts") pod "root-account-create-update-x22v8" (UID: "17f70688-a1f8-4465-821b-48f5381ff96c") : configmap "openstack-scripts" not found Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.609082 4732 projected.go:194] Error preparing data for projected volume kube-api-access-4wpw4 for pod swift-kuttl-tests/root-account-create-update-x22v8: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.609135 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4 podName:17f70688-a1f8-4465-821b-48f5381ff96c nodeName:}" failed. No retries permitted until 2026-01-31 09:23:27.609122918 +0000 UTC m=+1345.914999122 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4wpw4" (UniqueName: "kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4") pod "root-account-create-update-x22v8" (UID: "17f70688-a1f8-4465-821b-48f5381ff96c") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.680153 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" event={"ID":"1654407d-7276-4839-839d-1244759c4ad2","Type":"ContainerStarted","Data":"79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069"} Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.680197 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" event={"ID":"1654407d-7276-4839-839d-1244759c4ad2","Type":"ContainerStarted","Data":"7d6e2907463735b69f40b7370fe0069cdb743cf23d1f9aeb0278bee0ffa6f8b0"} Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.680535 4732 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" secret="" err="secret \"galera-openstack-dockercfg-4btb4\" not found" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.686007 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" event={"ID":"e5d550c8-968a-4962-9e23-c0c22911913d","Type":"ContainerDied","Data":"146893e4283a7dc87ac374f210d6e9d776b02509f4ed67945e0b2dcbb18bfb2f"} Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.686050 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="146893e4283a7dc87ac374f210d6e9d776b02509f4ed67945e0b2dcbb18bfb2f" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.686116 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbicanf734-account-delete-g264z" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.688120 4732 generic.go:334] "Generic (PLEG): container finished" podID="f7eb0179-b292-4a09-a07d-3d9bfe7978f3" containerID="bf4dd15bf1d89a707a1cc67b69c360511a62db3a9d76bcd538b729b470197a50" exitCode=0 Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.688194 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f7eb0179-b292-4a09-a07d-3d9bfe7978f3","Type":"ContainerDied","Data":"bf4dd15bf1d89a707a1cc67b69c360511a62db3a9d76bcd538b729b470197a50"} Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.689221 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.699263 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" podStartSLOduration=1.6992430600000001 podStartE2EDuration="1.69924306s" podCreationTimestamp="2026-01-31 09:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:23:26.694269356 +0000 UTC m=+1345.000145560" watchObservedRunningTime="2026-01-31 09:23:26.69924306 +0000 UTC m=+1345.005119274" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.699686 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.785093 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.810155 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 09:23:26 crc kubenswrapper[4732]: E0131 09:23:26.810225 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts podName:1654407d-7276-4839-839d-1244759c4ad2 nodeName:}" failed. No retries permitted until 2026-01-31 09:23:27.310209338 +0000 UTC m=+1345.616085542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts") pod "keystone5b86-account-delete-qkt4p" (UID: "1654407d-7276-4839-839d-1244759c4ad2") : configmap "openstack-scripts" not found Jan 31 09:23:26 crc kubenswrapper[4732]: I0131 09:23:26.936577 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.011428 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.011868 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-operator-scripts\") pod \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.011995 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjncs\" (UniqueName: \"kubernetes.io/projected/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kube-api-access-gjncs\") pod \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.012046 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-generated\") pod \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.012097 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kolla-config\") pod \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.012126 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-default\") pod \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\" (UID: \"f7eb0179-b292-4a09-a07d-3d9bfe7978f3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.012850 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "f7eb0179-b292-4a09-a07d-3d9bfe7978f3" (UID: "f7eb0179-b292-4a09-a07d-3d9bfe7978f3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.012977 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "f7eb0179-b292-4a09-a07d-3d9bfe7978f3" (UID: "f7eb0179-b292-4a09-a07d-3d9bfe7978f3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.013388 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "f7eb0179-b292-4a09-a07d-3d9bfe7978f3" (UID: "f7eb0179-b292-4a09-a07d-3d9bfe7978f3"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.013791 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7eb0179-b292-4a09-a07d-3d9bfe7978f3" (UID: "f7eb0179-b292-4a09-a07d-3d9bfe7978f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.018571 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kube-api-access-gjncs" (OuterVolumeSpecName: "kube-api-access-gjncs") pod "f7eb0179-b292-4a09-a07d-3d9bfe7978f3" (UID: "f7eb0179-b292-4a09-a07d-3d9bfe7978f3"). InnerVolumeSpecName "kube-api-access-gjncs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.022484 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "f7eb0179-b292-4a09-a07d-3d9bfe7978f3" (UID: "f7eb0179-b292-4a09-a07d-3d9bfe7978f3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.113893 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjncs\" (UniqueName: \"kubernetes.io/projected/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kube-api-access-gjncs\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.113920 4732 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.113929 4732 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.113953 4732 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.113979 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.113991 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7eb0179-b292-4a09-a07d-3d9bfe7978f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.127523 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.162169 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.221508 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: E0131 09:23:27.322455 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 09:23:27 crc kubenswrapper[4732]: E0131 09:23:27.322520 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts podName:1654407d-7276-4839-839d-1244759c4ad2 nodeName:}" failed. No retries permitted until 2026-01-31 09:23:28.322504962 +0000 UTC m=+1346.628381166 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts") pod "keystone5b86-account-delete-qkt4p" (UID: "1654407d-7276-4839-839d-1244759c4ad2") : configmap "openstack-scripts" not found Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.330176 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.423892 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data-custom\") pod \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.423970 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b1cc40-9985-45d8-bb06-0676ff188c6c-logs\") pod \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.424002 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c684\" (UniqueName: \"kubernetes.io/projected/d3b1cc40-9985-45d8-bb06-0676ff188c6c-kube-api-access-5c684\") pod \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.424097 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data\") pod \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\" (UID: \"d3b1cc40-9985-45d8-bb06-0676ff188c6c\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.425439 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3b1cc40-9985-45d8-bb06-0676ff188c6c-logs" (OuterVolumeSpecName: "logs") pod "d3b1cc40-9985-45d8-bb06-0676ff188c6c" (UID: "d3b1cc40-9985-45d8-bb06-0676ff188c6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.429655 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d3b1cc40-9985-45d8-bb06-0676ff188c6c" (UID: "d3b1cc40-9985-45d8-bb06-0676ff188c6c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.429887 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b1cc40-9985-45d8-bb06-0676ff188c6c-kube-api-access-5c684" (OuterVolumeSpecName: "kube-api-access-5c684") pod "d3b1cc40-9985-45d8-bb06-0676ff188c6c" (UID: "d3b1cc40-9985-45d8-bb06-0676ff188c6c"). InnerVolumeSpecName "kube-api-access-5c684". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.457896 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data" (OuterVolumeSpecName: "config-data") pod "d3b1cc40-9985-45d8-bb06-0676ff188c6c" (UID: "d3b1cc40-9985-45d8-bb06-0676ff188c6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.464438 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.527066 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.527109 4732 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3b1cc40-9985-45d8-bb06-0676ff188c6c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.527123 4732 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3b1cc40-9985-45d8-bb06-0676ff188c6c-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.527134 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c684\" (UniqueName: \"kubernetes.io/projected/d3b1cc40-9985-45d8-bb06-0676ff188c6c-kube-api-access-5c684\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.628097 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data\") pod \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.628142 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data-custom\") pod \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.628172 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lp6p\" (UniqueName: \"kubernetes.io/projected/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-kube-api-access-5lp6p\") pod \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.628217 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-logs\") pod 
\"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\" (UID: \"4981d9a9-898f-49ff-809d-58c7ca3bd2a3\") " Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.628438 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpw4\" (UniqueName: \"kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4\") pod \"root-account-create-update-x22v8\" (UID: \"17f70688-a1f8-4465-821b-48f5381ff96c\") " pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.628473 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts\") pod \"root-account-create-update-x22v8\" (UID: \"17f70688-a1f8-4465-821b-48f5381ff96c\") " pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:27 crc kubenswrapper[4732]: E0131 09:23:27.628651 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 09:23:27 crc kubenswrapper[4732]: E0131 09:23:27.628732 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts podName:17f70688-a1f8-4465-821b-48f5381ff96c nodeName:}" failed. No retries permitted until 2026-01-31 09:23:29.62871158 +0000 UTC m=+1347.934587784 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts") pod "root-account-create-update-x22v8" (UID: "17f70688-a1f8-4465-821b-48f5381ff96c") : configmap "openstack-scripts" not found Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.628947 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-logs" (OuterVolumeSpecName: "logs") pod "4981d9a9-898f-49ff-809d-58c7ca3bd2a3" (UID: "4981d9a9-898f-49ff-809d-58c7ca3bd2a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.631214 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-kube-api-access-5lp6p" (OuterVolumeSpecName: "kube-api-access-5lp6p") pod "4981d9a9-898f-49ff-809d-58c7ca3bd2a3" (UID: "4981d9a9-898f-49ff-809d-58c7ca3bd2a3"). InnerVolumeSpecName "kube-api-access-5lp6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.632113 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4981d9a9-898f-49ff-809d-58c7ca3bd2a3" (UID: "4981d9a9-898f-49ff-809d-58c7ca3bd2a3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: E0131 09:23:27.634144 4732 projected.go:194] Error preparing data for projected volume kube-api-access-4wpw4 for pod swift-kuttl-tests/root-account-create-update-x22v8: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 09:23:27 crc kubenswrapper[4732]: E0131 09:23:27.634195 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4 podName:17f70688-a1f8-4465-821b-48f5381ff96c nodeName:}" failed. No retries permitted until 2026-01-31 09:23:29.634177339 +0000 UTC m=+1347.940053543 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4wpw4" (UniqueName: "kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4") pod "root-account-create-update-x22v8" (UID: "17f70688-a1f8-4465-821b-48f5381ff96c") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.662864 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data" (OuterVolumeSpecName: "config-data") pod "4981d9a9-898f-49ff-809d-58c7ca3bd2a3" (UID: "4981d9a9-898f-49ff-809d-58c7ca3bd2a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.709231 4732 generic.go:334] "Generic (PLEG): container finished" podID="c0d4fa62-a33c-4ab2-a446-697994c1541e" containerID="3bead2528c2ca5acd2f850812b4f5c0dc70486b1bf295a4a4a607b3a99c2b59e" exitCode=0 Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.709303 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"c0d4fa62-a33c-4ab2-a446-697994c1541e","Type":"ContainerDied","Data":"3bead2528c2ca5acd2f850812b4f5c0dc70486b1bf295a4a4a607b3a99c2b59e"} Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.712974 4732 generic.go:334] "Generic (PLEG): container finished" podID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerID="341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8" exitCode=0 Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.713011 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" event={"ID":"4981d9a9-898f-49ff-809d-58c7ca3bd2a3","Type":"ContainerDied","Data":"341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8"} Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.713046 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" event={"ID":"4981d9a9-898f-49ff-809d-58c7ca3bd2a3","Type":"ContainerDied","Data":"dc8a708608424d3770f138159a52c830a7b371ad68ddd44083bf847d118f3337"} Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.713065 4732 scope.go:117] "RemoveContainer" containerID="341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.713086 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.729885 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.729916 4732 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.729930 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lp6p\" (UniqueName: \"kubernetes.io/projected/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-kube-api-access-5lp6p\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.729942 4732 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4981d9a9-898f-49ff-809d-58c7ca3bd2a3-logs\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.739878 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.740383 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f7eb0179-b292-4a09-a07d-3d9bfe7978f3","Type":"ContainerDied","Data":"a51f6b605ece49b9fb83731b6d6e01366d3ebfefee0aa6fe01b529f67148475c"} Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.748246 4732 scope.go:117] "RemoveContainer" containerID="2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.754451 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq"] Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.755572 4732 generic.go:334] "Generic (PLEG): container finished" podID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerID="9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf" exitCode=0 Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.756035 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" event={"ID":"d3b1cc40-9985-45d8-bb06-0676ff188c6c","Type":"ContainerDied","Data":"9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf"} Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.756081 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" event={"ID":"d3b1cc40-9985-45d8-bb06-0676ff188c6c","Type":"ContainerDied","Data":"349666c65ccee57c71b989263a2d6e9c2979da105d5b7e91fe8eeec62ef91646"} Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.756082 4732 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" secret="" err="secret \"galera-openstack-dockercfg-4btb4\" not found" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.756134 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-597f49d4f-8nh24" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.756335 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-x22v8" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.762372 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-api-5bb4486f46-rmsgq"] Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.772984 4732 scope.go:117] "RemoveContainer" containerID="341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8" Jan 31 09:23:27 crc kubenswrapper[4732]: E0131 09:23:27.773452 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8\": container with ID starting with 341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8 not found: ID does not exist" containerID="341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.773513 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8"} err="failed to get container status \"341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8\": rpc error: code = NotFound desc = could not find container \"341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8\": container with ID starting with 341aa4e395d7929014e446a8184c4ab222c0434acdf71dbf1596f5fc5c67f4a8 not found: ID does not exist" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.773546 4732 scope.go:117] "RemoveContainer" containerID="2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1" Jan 31 09:23:27 crc kubenswrapper[4732]: E0131 09:23:27.774424 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1\": container with ID starting with 2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1 not found: ID does not exist" containerID="2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.774459 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1"} err="failed to get container status \"2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1\": rpc error: code = NotFound desc = could not find container \"2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1\": container with ID starting with 2d6ada7bb6e0aa8ec000e73dc78802992d92ff017f59a57893472c54d628fcd1 not found: ID does not exist" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.774480 4732 scope.go:117] "RemoveContainer" containerID="bf4dd15bf1d89a707a1cc67b69c360511a62db3a9d76bcd538b729b470197a50" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.813113 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.820294 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.825409 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/rabbitmq-server-0" podUID="dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" containerName="rabbitmq" 
containerID="cri-o://5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d" gracePeriod=604800 Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.830845 4732 scope.go:117] "RemoveContainer" containerID="e5b6220dc9c37c2ae0ded221270ed7b6d934a81efa02b526750d6847a367f930" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.835803 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-worker-597f49d4f-8nh24"] Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.841689 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-worker-597f49d4f-8nh24"] Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.860203 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-x22v8"] Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.867949 4732 scope.go:117] "RemoveContainer" containerID="9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.869381 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-x22v8"] Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.956757 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 31 09:23:27 crc kubenswrapper[4732]: I0131 09:23:27.977547 4732 scope.go:117] "RemoveContainer" containerID="61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.042002 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvmz6\" (UniqueName: \"kubernetes.io/projected/c0d4fa62-a33c-4ab2-a446-697994c1541e-kube-api-access-rvmz6\") pod \"c0d4fa62-a33c-4ab2-a446-697994c1541e\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.042037 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-kolla-config\") pod \"c0d4fa62-a33c-4ab2-a446-697994c1541e\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.042064 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-config-data\") pod \"c0d4fa62-a33c-4ab2-a446-697994c1541e\" (UID: \"c0d4fa62-a33c-4ab2-a446-697994c1541e\") " Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.042319 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f70688-a1f8-4465-821b-48f5381ff96c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.042329 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wpw4\" (UniqueName: \"kubernetes.io/projected/17f70688-a1f8-4465-821b-48f5381ff96c-kube-api-access-4wpw4\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.042758 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-config-data" (OuterVolumeSpecName: "config-data") pod "c0d4fa62-a33c-4ab2-a446-697994c1541e" (UID: "c0d4fa62-a33c-4ab2-a446-697994c1541e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.043420 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c0d4fa62-a33c-4ab2-a446-697994c1541e" (UID: "c0d4fa62-a33c-4ab2-a446-697994c1541e"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.045342 4732 scope.go:117] "RemoveContainer" containerID="9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf" Jan 31 09:23:28 crc kubenswrapper[4732]: E0131 09:23:28.048974 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf\": container with ID starting with 9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf not found: ID does not exist" containerID="9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.049010 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf"} err="failed to get container status \"9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf\": rpc error: code = NotFound desc = could not find container \"9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf\": container with ID starting with 9bf2a11eb61aee168266496fbc63cf87fd0863d3de9bd60d727904eb9b9f54bf not found: ID does not exist" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.049034 4732 scope.go:117] "RemoveContainer" containerID="61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962" Jan 31 09:23:28 crc kubenswrapper[4732]: E0131 09:23:28.050624 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962\": container with ID starting with 61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962 not found: ID does not exist" containerID="61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.050644 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962"} err="failed to get container status \"61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962\": rpc error: code = NotFound desc = could not find container \"61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962\": container with ID starting with 61654a37d88891f02e083badeb8b5cfe93229dadee0e8b7d4de9237b19518962 not found: ID does not exist" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.062057 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d4fa62-a33c-4ab2-a446-697994c1541e-kube-api-access-rvmz6" (OuterVolumeSpecName: "kube-api-access-rvmz6") pod "c0d4fa62-a33c-4ab2-a446-697994c1541e" (UID: "c0d4fa62-a33c-4ab2-a446-697994c1541e"). InnerVolumeSpecName "kube-api-access-rvmz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.095396 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/openstack-galera-1" podUID="0682a582-79d6-4286-9a43-e4a258dde73f" containerName="galera" containerID="cri-o://ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e" gracePeriod=28 Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.143717 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvmz6\" (UniqueName: \"kubernetes.io/projected/c0d4fa62-a33c-4ab2-a446-697994c1541e-kube-api-access-rvmz6\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.143748 4732 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.143759 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0d4fa62-a33c-4ab2-a446-697994c1541e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: E0131 09:23:28.346779 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 09:23:28 crc kubenswrapper[4732]: E0131 09:23:28.346843 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts podName:1654407d-7276-4839-839d-1244759c4ad2 nodeName:}" failed. No retries permitted until 2026-01-31 09:23:30.346827901 +0000 UTC m=+1348.652704105 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts") pod "keystone5b86-account-delete-qkt4p" (UID: "1654407d-7276-4839-839d-1244759c4ad2") : configmap "openstack-scripts" not found Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.432837 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.551607 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f70688-a1f8-4465-821b-48f5381ff96c" path="/var/lib/kubelet/pods/17f70688-a1f8-4465-821b-48f5381ff96c/volumes" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.551962 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" path="/var/lib/kubelet/pods/4981d9a9-898f-49ff-809d-58c7ca3bd2a3/volumes" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.552514 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" path="/var/lib/kubelet/pods/d3b1cc40-9985-45d8-bb06-0676ff188c6c/volumes" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.553657 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7eb0179-b292-4a09-a07d-3d9bfe7978f3" path="/var/lib/kubelet/pods/f7eb0179-b292-4a09-a07d-3d9bfe7978f3/volumes" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.556054 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-config-data\") pod \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.556086 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-fernet-keys\") pod \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.556103 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-scripts\") pod \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.556265 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-credential-keys\") pod \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.556307 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsj4z\" (UniqueName: \"kubernetes.io/projected/1ee84530-efd7-4d83-9aa2-fb9b8b178496-kube-api-access-jsj4z\") pod \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\" (UID: \"1ee84530-efd7-4d83-9aa2-fb9b8b178496\") " Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.560887 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1ee84530-efd7-4d83-9aa2-fb9b8b178496" (UID: "1ee84530-efd7-4d83-9aa2-fb9b8b178496"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.561242 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1ee84530-efd7-4d83-9aa2-fb9b8b178496" (UID: "1ee84530-efd7-4d83-9aa2-fb9b8b178496"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.561307 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee84530-efd7-4d83-9aa2-fb9b8b178496-kube-api-access-jsj4z" (OuterVolumeSpecName: "kube-api-access-jsj4z") pod "1ee84530-efd7-4d83-9aa2-fb9b8b178496" (UID: "1ee84530-efd7-4d83-9aa2-fb9b8b178496"). InnerVolumeSpecName "kube-api-access-jsj4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.561726 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-scripts" (OuterVolumeSpecName: "scripts") pod "1ee84530-efd7-4d83-9aa2-fb9b8b178496" (UID: "1ee84530-efd7-4d83-9aa2-fb9b8b178496"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.575968 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-config-data" (OuterVolumeSpecName: "config-data") pod "1ee84530-efd7-4d83-9aa2-fb9b8b178496" (UID: "1ee84530-efd7-4d83-9aa2-fb9b8b178496"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.610641 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-create-hffnv"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.618566 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-create-hffnv"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.627997 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-f734-account-create-update-wkhs4"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.633501 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbicanf734-account-delete-g264z"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.639572 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-f734-account-create-update-wkhs4"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.644271 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbicanf734-account-delete-g264z"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.657783 4732 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.657831 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsj4z\" (UniqueName: \"kubernetes.io/projected/1ee84530-efd7-4d83-9aa2-fb9b8b178496-kube-api-access-jsj4z\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.657847 4732 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.657859 4732 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.657870 4732 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee84530-efd7-4d83-9aa2-fb9b8b178496-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.781523 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"c0d4fa62-a33c-4ab2-a446-697994c1541e","Type":"ContainerDied","Data":"d82037df40928405007453003da8d4a3924f1437dbd107f02f6b845354809fb0"} Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.781592 4732 scope.go:117] "RemoveContainer" containerID="3bead2528c2ca5acd2f850812b4f5c0dc70486b1bf295a4a4a607b3a99c2b59e" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.781545 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.782820 4732 generic.go:334] "Generic (PLEG): container finished" podID="1ee84530-efd7-4d83-9aa2-fb9b8b178496" containerID="2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932" exitCode=0 Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.782853 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" event={"ID":"1ee84530-efd7-4d83-9aa2-fb9b8b178496","Type":"ContainerDied","Data":"2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932"} Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.782996 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" event={"ID":"1ee84530-efd7-4d83-9aa2-fb9b8b178496","Type":"ContainerDied","Data":"dab5aaa05d738eb5bfa3b09e12be7ced9a61a97b3de7389937699c76857d4ec7"} Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.782873 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-7959cb4f8b-4vxsg" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.803573 4732 scope.go:117] "RemoveContainer" containerID="2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.825742 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.828149 4732 scope.go:117] "RemoveContainer" containerID="2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932" Jan 31 09:23:28 crc kubenswrapper[4732]: E0131 09:23:28.828627 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932\": container with ID starting with 2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932 not found: ID does not exist" containerID="2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.828966 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932"} err="failed to get container status \"2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932\": rpc error: code = NotFound desc = could not find container \"2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932\": container with ID starting with 2242e86748f2123e7c2371fd3207440c5237f2bf59a11e74432d9fe62c372932 not found: ID does not exist" Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.830713 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.838519 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-7959cb4f8b-4vxsg"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.844762 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-7959cb4f8b-4vxsg"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.940150 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc"] Jan 31 09:23:28 crc kubenswrapper[4732]: I0131 09:23:28.940458 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" podUID="17f93e90-1e9a-439c-a130-487ebf54ad10" containerName="manager" containerID="cri-o://dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697" gracePeriod=10 Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.218504 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-zhkcf"] Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.219197 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-index-zhkcf" podUID="24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf" containerName="registry-server" containerID="cri-o://27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f" gracePeriod=30 Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.261551 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv"] Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.268928 4732 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/0e62aba39cc8fdb97a45279afc017a13ec54a5c105520c9343a09281d2km4rv"] Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.400251 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.425421 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570531 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9\") pod \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570577 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-pod-info\") pod \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570600 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-erlang-cookie-secret\") pod \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570683 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxmtn\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-kube-api-access-xxmtn\") pod \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570702 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-confd\") pod \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570720 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-webhook-cert\") pod \"17f93e90-1e9a-439c-a130-487ebf54ad10\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570749 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-erlang-cookie\") pod \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570788 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-plugins-conf\") pod \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570805 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-apiservice-cert\") pod \"17f93e90-1e9a-439c-a130-487ebf54ad10\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570827 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-plugins\") pod \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.570851 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28txj\" (UniqueName: \"kubernetes.io/projected/17f93e90-1e9a-439c-a130-487ebf54ad10-kube-api-access-28txj\") pod \"17f93e90-1e9a-439c-a130-487ebf54ad10\" (UID: \"17f93e90-1e9a-439c-a130-487ebf54ad10\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.571445 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" (UID: "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.572449 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" (UID: "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.572458 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" (UID: "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.577204 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" (UID: "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.577218 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "17f93e90-1e9a-439c-a130-487ebf54ad10" (UID: "17f93e90-1e9a-439c-a130-487ebf54ad10"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.577260 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-kube-api-access-xxmtn" (OuterVolumeSpecName: "kube-api-access-xxmtn") pod "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" (UID: "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd"). 
InnerVolumeSpecName "kube-api-access-xxmtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.577297 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "17f93e90-1e9a-439c-a130-487ebf54ad10" (UID: "17f93e90-1e9a-439c-a130-487ebf54ad10"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.577317 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f93e90-1e9a-439c-a130-487ebf54ad10-kube-api-access-28txj" (OuterVolumeSpecName: "kube-api-access-28txj") pod "17f93e90-1e9a-439c-a130-487ebf54ad10" (UID: "17f93e90-1e9a-439c-a130-487ebf54ad10"). InnerVolumeSpecName "kube-api-access-28txj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.578171 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-pod-info" (OuterVolumeSpecName: "pod-info") pod "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" (UID: "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: E0131 09:23:29.578302 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9 podName:dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd nodeName:}" failed. No retries permitted until 2026-01-31 09:23:30.078287858 +0000 UTC m=+1348.384164062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9") pod "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" (UID: "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.579624 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.632513 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" (UID: "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.672630 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59577\" (UniqueName: \"kubernetes.io/projected/24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf-kube-api-access-59577\") pod \"24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf\" (UID: \"24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf\") " Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.673385 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28txj\" (UniqueName: \"kubernetes.io/projected/17f93e90-1e9a-439c-a130-487ebf54ad10-kube-api-access-28txj\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.674135 4732 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.674156 4732 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.674168 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxmtn\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-kube-api-access-xxmtn\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.674177 4732 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.674186 4732 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.674195 4732 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.674204 4732 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17f93e90-1e9a-439c-a130-487ebf54ad10-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.674212 4732 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.674219 4732 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.676585 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf-kube-api-access-59577" (OuterVolumeSpecName: "kube-api-access-59577") pod "24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf" (UID: "24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf"). InnerVolumeSpecName "kube-api-access-59577". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.777161 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59577\" (UniqueName: \"kubernetes.io/projected/24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf-kube-api-access-59577\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.807007 4732 generic.go:334] "Generic (PLEG): container finished" podID="17f93e90-1e9a-439c-a130-487ebf54ad10" containerID="dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697" exitCode=0 Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.807120 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" event={"ID":"17f93e90-1e9a-439c-a130-487ebf54ad10","Type":"ContainerDied","Data":"dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697"} Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.807882 4732 scope.go:117] "RemoveContainer" containerID="dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.807900 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.808719 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc" event={"ID":"17f93e90-1e9a-439c-a130-487ebf54ad10","Type":"ContainerDied","Data":"3190f0d353720c75142ebd0bfb06e439e4a2407802386eda349902b5c0a59659"} Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.810231 4732 generic.go:334] "Generic (PLEG): container finished" podID="24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf" containerID="27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f" exitCode=0 Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.810285 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-zhkcf" event={"ID":"24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf","Type":"ContainerDied","Data":"27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f"} Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.810305 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-zhkcf" event={"ID":"24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf","Type":"ContainerDied","Data":"8514ba99bedbc8f5d369f906a000fdaa79e2f95c8cdc60f9e5782dca2dcdc8ab"} Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.810368 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-zhkcf" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.813481 4732 generic.go:334] "Generic (PLEG): container finished" podID="dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" containerID="5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d" exitCode=0 Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.813553 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd","Type":"ContainerDied","Data":"5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d"} Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.813570 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.813582 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd","Type":"ContainerDied","Data":"c0069b5700dea7f3177d96246d005bd742c680b2404c8813992998fda3388237"} Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.852888 4732 scope.go:117] "RemoveContainer" containerID="dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697" Jan 31 09:23:29 crc kubenswrapper[4732]: E0131 09:23:29.854376 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697\": container with ID starting with dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697 not found: ID does not exist" containerID="dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.854506 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697"} err="failed to get container status \"dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697\": rpc error: code = NotFound desc = could not find container \"dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697\": container with ID starting with dc0300becefa39e137a97b82074b1aa44c6492acb1b746336056c96e36e23697 not found: ID does not exist" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.854602 4732 scope.go:117] "RemoveContainer" containerID="27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.855813 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc"] Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.867010 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6786c8c4f8-x5zqc"] Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.869249 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-zhkcf"] Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.873098 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-index-zhkcf"] Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.875734 4732 scope.go:117] "RemoveContainer" containerID="27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f" Jan 31 09:23:29 crc kubenswrapper[4732]: E0131 09:23:29.878375 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f\": container with ID starting with 27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f not found: ID does not exist" containerID="27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.878444 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f"} err="failed to get container status \"27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f\": rpc error: code = NotFound desc = could not 
find container \"27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f\": container with ID starting with 27e7fcfe2e3b7ddaff4e021897bbeb8cb28014e689190c68e97e8b5269f0181f not found: ID does not exist" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.878479 4732 scope.go:117] "RemoveContainer" containerID="5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.919400 4732 scope.go:117] "RemoveContainer" containerID="cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.945188 4732 scope.go:117] "RemoveContainer" containerID="5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d" Jan 31 09:23:29 crc kubenswrapper[4732]: E0131 09:23:29.945623 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d\": container with ID starting with 5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d not found: ID does not exist" containerID="5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.945658 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d"} err="failed to get container status \"5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d\": rpc error: code = NotFound desc = could not find container \"5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d\": container with ID starting with 5671dbb66e09093ec1e8398957e4f56f878e7982d9afc91bf35fc2570b73092d not found: ID does not exist" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.945736 4732 scope.go:117] "RemoveContainer" containerID="cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb" Jan 31 09:23:29 crc kubenswrapper[4732]: E0131 09:23:29.947060 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb\": container with ID starting with cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb not found: ID does not exist" containerID="cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb" Jan 31 09:23:29 crc kubenswrapper[4732]: I0131 09:23:29.947086 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb"} err="failed to get container status \"cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb\": rpc error: code = NotFound desc = could not find container \"cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb\": container with ID starting with cb8d487c03d78ac3a07f4803e072d42f9685a260011084fe0b63e98b41a409fb not found: ID does not exist" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.084680 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9\") pod \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\" (UID: \"dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.089676 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["swift-kuttl-tests/keystone-db-create-qtddl"] Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.100633 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-create-qtddl"] Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.109960 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9" (OuterVolumeSpecName: "persistence") pod "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" (UID: "dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd"). InnerVolumeSpecName "pvc-017be409-2d02-48b0-bb67-3403111bd6b9". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.116767 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone5b86-account-delete-qkt4p"] Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.116960 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" podUID="1654407d-7276-4839-839d-1244759c4ad2" containerName="mariadb-account-delete" containerID="cri-o://79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069" gracePeriod=30 Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.130014 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk"] Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.132245 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.139463 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-5b86-account-create-update-vh2kk"] Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.142424 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/openstack-galera-0" podUID="616eedfe-830a-4ca8-9c42-a2cfd9352312" containerName="galera" containerID="cri-o://8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511" gracePeriod=26 Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.179844 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.184898 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.190562 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-017be409-2d02-48b0-bb67-3403111bd6b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9\") on node \"crc\" " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.214754 4732 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.214909 4732 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-017be409-2d02-48b0-bb67-3403111bd6b9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9") on node "crc" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.291796 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-operator-scripts\") pod \"0682a582-79d6-4286-9a43-e4a258dde73f\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.292098 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-default\") pod \"0682a582-79d6-4286-9a43-e4a258dde73f\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.292231 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0682a582-79d6-4286-9a43-e4a258dde73f\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.292387 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xg64\" (UniqueName: \"kubernetes.io/projected/0682a582-79d6-4286-9a43-e4a258dde73f-kube-api-access-9xg64\") pod \"0682a582-79d6-4286-9a43-e4a258dde73f\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.292435 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0682a582-79d6-4286-9a43-e4a258dde73f" (UID: "0682a582-79d6-4286-9a43-e4a258dde73f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.292499 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "0682a582-79d6-4286-9a43-e4a258dde73f" (UID: "0682a582-79d6-4286-9a43-e4a258dde73f"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.292542 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-kolla-config\") pod \"0682a582-79d6-4286-9a43-e4a258dde73f\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.293113 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "0682a582-79d6-4286-9a43-e4a258dde73f" (UID: "0682a582-79d6-4286-9a43-e4a258dde73f"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.293247 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-generated\") pod \"0682a582-79d6-4286-9a43-e4a258dde73f\" (UID: \"0682a582-79d6-4286-9a43-e4a258dde73f\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.293526 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "0682a582-79d6-4286-9a43-e4a258dde73f" (UID: "0682a582-79d6-4286-9a43-e4a258dde73f"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.293728 4732 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.293807 4732 reconciler_common.go:293] "Volume detached for volume \"pvc-017be409-2d02-48b0-bb67-3403111bd6b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-017be409-2d02-48b0-bb67-3403111bd6b9\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.293867 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.293926 4732 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.293980 4732 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0682a582-79d6-4286-9a43-e4a258dde73f-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.295380 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0682a582-79d6-4286-9a43-e4a258dde73f-kube-api-access-9xg64" (OuterVolumeSpecName: "kube-api-access-9xg64") pod "0682a582-79d6-4286-9a43-e4a258dde73f" (UID: "0682a582-79d6-4286-9a43-e4a258dde73f"). InnerVolumeSpecName "kube-api-access-9xg64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.299302 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "0682a582-79d6-4286-9a43-e4a258dde73f" (UID: "0682a582-79d6-4286-9a43-e4a258dde73f"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.395409 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xg64\" (UniqueName: \"kubernetes.io/projected/0682a582-79d6-4286-9a43-e4a258dde73f-kube-api-access-9xg64\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.395753 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 09:23:30 crc kubenswrapper[4732]: E0131 09:23:30.395541 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 09:23:30 crc kubenswrapper[4732]: E0131 09:23:30.395986 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts podName:1654407d-7276-4839-839d-1244759c4ad2 nodeName:}" failed. No retries permitted until 2026-01-31 09:23:34.395959504 +0000 UTC m=+1352.701835708 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts") pod "keystone5b86-account-delete-qkt4p" (UID: "1654407d-7276-4839-839d-1244759c4ad2") : configmap "openstack-scripts" not found Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.408017 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.497268 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.548959 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f93e90-1e9a-439c-a130-487ebf54ad10" path="/var/lib/kubelet/pods/17f93e90-1e9a-439c-a130-487ebf54ad10/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.549556 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee84530-efd7-4d83-9aa2-fb9b8b178496" path="/var/lib/kubelet/pods/1ee84530-efd7-4d83-9aa2-fb9b8b178496/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.550078 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf" path="/var/lib/kubelet/pods/24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.550641 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d54d8c7-230a-4831-97a8-d17aef7fa6eb" path="/var/lib/kubelet/pods/6d54d8c7-230a-4831-97a8-d17aef7fa6eb/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.551839 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92cd61e3-285c-42b8-b382-b8dde5e934b8" path="/var/lib/kubelet/pods/92cd61e3-285c-42b8-b382-b8dde5e934b8/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.552390 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8db73f4-a3fd-4276-87eb-69db3df2adb6" path="/var/lib/kubelet/pods/a8db73f4-a3fd-4276-87eb-69db3df2adb6/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.553145 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bf9e4683-b288-4b28-9b9b-504461c55a4e" path="/var/lib/kubelet/pods/bf9e4683-b288-4b28-9b9b-504461c55a4e/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.554216 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d4fa62-a33c-4ab2-a446-697994c1541e" path="/var/lib/kubelet/pods/c0d4fa62-a33c-4ab2-a446-697994c1541e/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.554655 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c86b412a-c376-48cd-b724-77e5fb6c9347" path="/var/lib/kubelet/pods/c86b412a-c376-48cd-b724-77e5fb6c9347/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.555606 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" path="/var/lib/kubelet/pods/dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.556164 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d550c8-968a-4962-9e23-c0c22911913d" path="/var/lib/kubelet/pods/e5d550c8-968a-4962-9e23-c0c22911913d/volumes" Jan 31 09:23:30 crc kubenswrapper[4732]: E0131 09:23:30.661072 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511 is running failed: container process not found" containerID="8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 09:23:30 crc kubenswrapper[4732]: E0131 09:23:30.661426 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511 is running failed: container process not found" containerID="8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 09:23:30 crc kubenswrapper[4732]: E0131 09:23:30.661730 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511 is running failed: container process not found" containerID="8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 31 09:23:30 crc kubenswrapper[4732]: E0131 09:23:30.661791 4732 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511 is running failed: container process not found" probeType="Readiness" pod="swift-kuttl-tests/openstack-galera-0" podUID="616eedfe-830a-4ca8-9c42-a2cfd9352312" containerName="galera" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.804277 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.831862 4732 generic.go:334] "Generic (PLEG): container finished" podID="616eedfe-830a-4ca8-9c42-a2cfd9352312" containerID="8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511" exitCode=0 Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.831896 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.831916 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"616eedfe-830a-4ca8-9c42-a2cfd9352312","Type":"ContainerDied","Data":"8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511"} Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.831963 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"616eedfe-830a-4ca8-9c42-a2cfd9352312","Type":"ContainerDied","Data":"2da8aa3ed8596e5beb1462be0a364f515a0e7e35f648a1fd6f1e41dd41f084dd"} Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.831983 4732 scope.go:117] "RemoveContainer" containerID="8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.834485 4732 generic.go:334] "Generic (PLEG): container finished" podID="0682a582-79d6-4286-9a43-e4a258dde73f" containerID="ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e" exitCode=0 Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.834542 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.834571 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"0682a582-79d6-4286-9a43-e4a258dde73f","Type":"ContainerDied","Data":"ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e"} Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.834600 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"0682a582-79d6-4286-9a43-e4a258dde73f","Type":"ContainerDied","Data":"e0837550f45b62d919804bb06cc6ab1d6a363eb752c3707a21645f710833b1bc"} Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.860195 4732 scope.go:117] "RemoveContainer" containerID="492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.862222 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.868748 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.903785 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-default\") pod \"616eedfe-830a-4ca8-9c42-a2cfd9352312\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.903852 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-kolla-config\") pod \"616eedfe-830a-4ca8-9c42-a2cfd9352312\" (UID: 
\"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.903903 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-generated\") pod \"616eedfe-830a-4ca8-9c42-a2cfd9352312\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.903927 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-operator-scripts\") pod \"616eedfe-830a-4ca8-9c42-a2cfd9352312\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.903971 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"616eedfe-830a-4ca8-9c42-a2cfd9352312\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.904014 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnkrx\" (UniqueName: \"kubernetes.io/projected/616eedfe-830a-4ca8-9c42-a2cfd9352312-kube-api-access-jnkrx\") pod \"616eedfe-830a-4ca8-9c42-a2cfd9352312\" (UID: \"616eedfe-830a-4ca8-9c42-a2cfd9352312\") " Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.904517 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "616eedfe-830a-4ca8-9c42-a2cfd9352312" (UID: "616eedfe-830a-4ca8-9c42-a2cfd9352312"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.904880 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "616eedfe-830a-4ca8-9c42-a2cfd9352312" (UID: "616eedfe-830a-4ca8-9c42-a2cfd9352312"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.904963 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "616eedfe-830a-4ca8-9c42-a2cfd9352312" (UID: "616eedfe-830a-4ca8-9c42-a2cfd9352312"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.905095 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "616eedfe-830a-4ca8-9c42-a2cfd9352312" (UID: "616eedfe-830a-4ca8-9c42-a2cfd9352312"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.908713 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616eedfe-830a-4ca8-9c42-a2cfd9352312-kube-api-access-jnkrx" (OuterVolumeSpecName: "kube-api-access-jnkrx") pod "616eedfe-830a-4ca8-9c42-a2cfd9352312" (UID: "616eedfe-830a-4ca8-9c42-a2cfd9352312"). InnerVolumeSpecName "kube-api-access-jnkrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.914769 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "616eedfe-830a-4ca8-9c42-a2cfd9352312" (UID: "616eedfe-830a-4ca8-9c42-a2cfd9352312"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.915351 4732 scope.go:117] "RemoveContainer" containerID="8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511" Jan 31 09:23:30 crc kubenswrapper[4732]: E0131 09:23:30.916202 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511\": container with ID starting with 8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511 not found: ID does not exist" containerID="8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.916243 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511"} err="failed to get container status \"8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511\": rpc error: code = NotFound desc = could not find container \"8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511\": container with ID starting with 8526f310ead3f5a33ed7787edf3905aaec728d5abbfb09eb99dc036ea2fa4511 not found: ID does not exist" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.916271 4732 scope.go:117] "RemoveContainer" containerID="492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4" Jan 31 09:23:30 crc kubenswrapper[4732]: E0131 09:23:30.916778 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4\": container with ID starting with 492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4 not found: ID does not exist" containerID="492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.916827 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4"} err="failed to get container status \"492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4\": rpc error: code = NotFound desc = could not find container \"492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4\": container with ID starting with 492d45e71fac2738ff1d26d6c415b6dfd2b83adc568c28c60ba8fc993992d5c4 not found: ID does not exist" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.917036 4732 scope.go:117] "RemoveContainer" 
containerID="ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e" Jan 31 09:23:30 crc kubenswrapper[4732]: I0131 09:23:30.981675 4732 scope.go:117] "RemoveContainer" containerID="ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.004010 4732 scope.go:117] "RemoveContainer" containerID="ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e" Jan 31 09:23:31 crc kubenswrapper[4732]: E0131 09:23:31.004536 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e\": container with ID starting with ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e not found: ID does not exist" containerID="ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.004575 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e"} err="failed to get container status \"ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e\": rpc error: code = NotFound desc = could not find container \"ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e\": container with ID starting with ee4de6ba1b0a29ba82881c9260150c9bf01cda625efde0b6a945316636e5517e not found: ID does not exist" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.004601 4732 scope.go:117] "RemoveContainer" containerID="ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279" Jan 31 09:23:31 crc kubenswrapper[4732]: E0131 09:23:31.004958 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279\": container with ID starting with ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279 not found: ID does not exist" containerID="ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.004991 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279"} err="failed to get container status \"ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279\": rpc error: code = NotFound desc = could not find container \"ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279\": container with ID starting with ce97edc32bdb9fb197af077948d670e3a55f9f7d022545ce9a9c32c0c8222279 not found: ID does not exist" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.005061 4732 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.005090 4732 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.005100 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 
31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.005131 4732 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.005141 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnkrx\" (UniqueName: \"kubernetes.io/projected/616eedfe-830a-4ca8-9c42-a2cfd9352312-kube-api-access-jnkrx\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.005149 4732 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616eedfe-830a-4ca8-9c42-a2cfd9352312-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.017390 4732 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.106918 4732 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.161733 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 09:23:31 crc kubenswrapper[4732]: I0131 09:23:31.165425 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.096809 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c"] Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.097020 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" podUID="4ef2e0aa-0782-4814-9d5f-9a6a32fb121b" containerName="manager" containerID="cri-o://613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1" gracePeriod=10 Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.314238 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-ps2mw"] Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.314718 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-index-ps2mw" podUID="a6b3a350-8b41-44a0-a6b4-b957947e1df6" containerName="registry-server" containerID="cri-o://6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192" gracePeriod=30 Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.333618 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln"] Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.342373 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bcwsln"] Jan 31 09:23:32 crc kubenswrapper[4732]: E0131 09:23:32.384400 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192 is running failed: container process not found" 
containerID="6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 09:23:32 crc kubenswrapper[4732]: E0131 09:23:32.384846 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192 is running failed: container process not found" containerID="6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 09:23:32 crc kubenswrapper[4732]: E0131 09:23:32.385252 4732 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192 is running failed: container process not found" containerID="6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 09:23:32 crc kubenswrapper[4732]: E0131 09:23:32.385307 4732 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/barbican-operator-index-ps2mw" podUID="a6b3a350-8b41-44a0-a6b4-b957947e1df6" containerName="registry-server" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.553978 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0682a582-79d6-4286-9a43-e4a258dde73f" path="/var/lib/kubelet/pods/0682a582-79d6-4286-9a43-e4a258dde73f/volumes" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.554760 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a9d87a5-c953-483d-8183-0a0b8d4abac9" path="/var/lib/kubelet/pods/5a9d87a5-c953-483d-8183-0a0b8d4abac9/volumes" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.555589 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="616eedfe-830a-4ca8-9c42-a2cfd9352312" path="/var/lib/kubelet/pods/616eedfe-830a-4ca8-9c42-a2cfd9352312/volumes" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.568176 4732 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.568176 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c"
Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.730876 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-webhook-cert\") pod \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") "
Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.731013 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwr5w\" (UniqueName: \"kubernetes.io/projected/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-kube-api-access-vwr5w\") pod \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") "
Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.731055 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-apiservice-cert\") pod \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\" (UID: \"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b\") "
Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.738197 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-kube-api-access-vwr5w" (OuterVolumeSpecName: "kube-api-access-vwr5w") pod "4ef2e0aa-0782-4814-9d5f-9a6a32fb121b" (UID: "4ef2e0aa-0782-4814-9d5f-9a6a32fb121b"). InnerVolumeSpecName "kube-api-access-vwr5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.738594 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4ef2e0aa-0782-4814-9d5f-9a6a32fb121b" (UID: "4ef2e0aa-0782-4814-9d5f-9a6a32fb121b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.743971 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "4ef2e0aa-0782-4814-9d5f-9a6a32fb121b" (UID: "4ef2e0aa-0782-4814-9d5f-9a6a32fb121b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.799782 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-ps2mw" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.832449 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwr5w\" (UniqueName: \"kubernetes.io/projected/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-kube-api-access-vwr5w\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.832486 4732 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.832498 4732 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.856813 4732 generic.go:334] "Generic (PLEG): container finished" podID="4ef2e0aa-0782-4814-9d5f-9a6a32fb121b" containerID="613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1" exitCode=0 Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.856861 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" event={"ID":"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b","Type":"ContainerDied","Data":"613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1"} Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.856912 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" event={"ID":"4ef2e0aa-0782-4814-9d5f-9a6a32fb121b","Type":"ContainerDied","Data":"f54629922c72c3da7a2c23ec9c364e1132ebace0c630660cc9bfe34f06a31d4f"} Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.856932 4732 scope.go:117] "RemoveContainer" containerID="613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.857477 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.859490 4732 generic.go:334] "Generic (PLEG): container finished" podID="a6b3a350-8b41-44a0-a6b4-b957947e1df6" containerID="6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192" exitCode=0 Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.859519 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-ps2mw" event={"ID":"a6b3a350-8b41-44a0-a6b4-b957947e1df6","Type":"ContainerDied","Data":"6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192"} Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.859546 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-ps2mw" event={"ID":"a6b3a350-8b41-44a0-a6b4-b957947e1df6","Type":"ContainerDied","Data":"a2b2ec2462bff27c657ef1956671a25bb4c7e593d5773eb486359c1043f19888"} Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.859601 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-ps2mw" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.877220 4732 scope.go:117] "RemoveContainer" containerID="613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1" Jan 31 09:23:32 crc kubenswrapper[4732]: E0131 09:23:32.878027 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1\": container with ID starting with 613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1 not found: ID does not exist" containerID="613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.878137 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1"} err="failed to get container status \"613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1\": rpc error: code = NotFound desc = could not find container \"613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1\": container with ID starting with 613e26fde16172e6df7958455ae9310666a808f14ae1a1a48a42003388cc5dc1 not found: ID does not exist" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.878226 4732 scope.go:117] "RemoveContainer" containerID="6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.887278 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c"] Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.891654 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-665b569d9f-rjs9c"] Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.893994 4732 scope.go:117] "RemoveContainer" containerID="6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192" Jan 31 09:23:32 crc kubenswrapper[4732]: E0131 09:23:32.894372 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192\": container with ID starting with 6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192 not found: ID does not exist" containerID="6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.894414 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192"} err="failed to get container status \"6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192\": rpc error: code = NotFound desc = could not find container \"6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192\": container with ID starting with 6cf31f543f940bfb1886623400213cf6174f8dfa77f1c5bc43119a36a85bd192 not found: ID does not exist" Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 09:23:32.933509 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj79r\" (UniqueName: \"kubernetes.io/projected/a6b3a350-8b41-44a0-a6b4-b957947e1df6-kube-api-access-cj79r\") pod \"a6b3a350-8b41-44a0-a6b4-b957947e1df6\" (UID: \"a6b3a350-8b41-44a0-a6b4-b957947e1df6\") " Jan 31 09:23:32 crc kubenswrapper[4732]: I0131 
09:23:32.936366 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b3a350-8b41-44a0-a6b4-b957947e1df6-kube-api-access-cj79r" (OuterVolumeSpecName: "kube-api-access-cj79r") pod "a6b3a350-8b41-44a0-a6b4-b957947e1df6" (UID: "a6b3a350-8b41-44a0-a6b4-b957947e1df6"). InnerVolumeSpecName "kube-api-access-cj79r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:33 crc kubenswrapper[4732]: I0131 09:23:33.036134 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj79r\" (UniqueName: \"kubernetes.io/projected/a6b3a350-8b41-44a0-a6b4-b957947e1df6-kube-api-access-cj79r\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:33 crc kubenswrapper[4732]: I0131 09:23:33.233727 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-ps2mw"]
Jan 31 09:23:33 crc kubenswrapper[4732]: I0131 09:23:33.238599 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-index-ps2mw"]
Jan 31 09:23:34 crc kubenswrapper[4732]: E0131 09:23:34.453469 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found
Jan 31 09:23:34 crc kubenswrapper[4732]: E0131 09:23:34.453552 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts podName:1654407d-7276-4839-839d-1244759c4ad2 nodeName:}" failed. No retries permitted until 2026-01-31 09:23:42.453537958 +0000 UTC m=+1360.759414162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts") pod "keystone5b86-account-delete-qkt4p" (UID: "1654407d-7276-4839-839d-1244759c4ad2") : configmap "openstack-scripts" not found
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.551629 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef2e0aa-0782-4814-9d5f-9a6a32fb121b" path="/var/lib/kubelet/pods/4ef2e0aa-0782-4814-9d5f-9a6a32fb121b/volumes"
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.552376 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b3a350-8b41-44a0-a6b4-b957947e1df6" path="/var/lib/kubelet/pods/a6b3a350-8b41-44a0-a6b4-b957947e1df6/volumes"
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.699742 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"]
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.699931 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq" podUID="241eae26-3908-40e0-af9c-59b54a6ab1a0" containerName="manager" containerID="cri-o://33284c815fdcf867d64c94279b754de6fc3557337b45dc375a7c56112e2acde1" gracePeriod=10
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.877803 4732 generic.go:334] "Generic (PLEG): container finished" podID="241eae26-3908-40e0-af9c-59b54a6ab1a0" containerID="33284c815fdcf867d64c94279b754de6fc3557337b45dc375a7c56112e2acde1" exitCode=0
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.877978 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq" event={"ID":"241eae26-3908-40e0-af9c-59b54a6ab1a0","Type":"ContainerDied","Data":"33284c815fdcf867d64c94279b754de6fc3557337b45dc375a7c56112e2acde1"}
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.915182 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-hbpcb"]
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.915366 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-hbpcb" podUID="e617e130-c338-40d1-9a5c-83e925a4e6ed" containerName="registry-server" containerID="cri-o://c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d" gracePeriod=30
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.940654 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"]
Jan 31 09:23:34 crc kubenswrapper[4732]: I0131 09:23:34.945618 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvpfwj"]
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.121177 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.263419 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7m6t\" (UniqueName: \"kubernetes.io/projected/241eae26-3908-40e0-af9c-59b54a6ab1a0-kube-api-access-r7m6t\") pod \"241eae26-3908-40e0-af9c-59b54a6ab1a0\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") "
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.263551 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-webhook-cert\") pod \"241eae26-3908-40e0-af9c-59b54a6ab1a0\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") "
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.263581 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-apiservice-cert\") pod \"241eae26-3908-40e0-af9c-59b54a6ab1a0\" (UID: \"241eae26-3908-40e0-af9c-59b54a6ab1a0\") "
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.271489 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/241eae26-3908-40e0-af9c-59b54a6ab1a0-kube-api-access-r7m6t" (OuterVolumeSpecName: "kube-api-access-r7m6t") pod "241eae26-3908-40e0-af9c-59b54a6ab1a0" (UID: "241eae26-3908-40e0-af9c-59b54a6ab1a0"). InnerVolumeSpecName "kube-api-access-r7m6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.272226 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "241eae26-3908-40e0-af9c-59b54a6ab1a0" (UID: "241eae26-3908-40e0-af9c-59b54a6ab1a0"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.278226 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "241eae26-3908-40e0-af9c-59b54a6ab1a0" (UID: "241eae26-3908-40e0-af9c-59b54a6ab1a0"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.300227 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.368205 4732 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.368233 4732 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/241eae26-3908-40e0-af9c-59b54a6ab1a0-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.368245 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7m6t\" (UniqueName: \"kubernetes.io/projected/241eae26-3908-40e0-af9c-59b54a6ab1a0-kube-api-access-r7m6t\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.469343 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnrxr\" (UniqueName: \"kubernetes.io/projected/e617e130-c338-40d1-9a5c-83e925a4e6ed-kube-api-access-cnrxr\") pod \"e617e130-c338-40d1-9a5c-83e925a4e6ed\" (UID: \"e617e130-c338-40d1-9a5c-83e925a4e6ed\") "
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.472352 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e617e130-c338-40d1-9a5c-83e925a4e6ed-kube-api-access-cnrxr" (OuterVolumeSpecName: "kube-api-access-cnrxr") pod "e617e130-c338-40d1-9a5c-83e925a4e6ed" (UID: "e617e130-c338-40d1-9a5c-83e925a4e6ed"). InnerVolumeSpecName "kube-api-access-cnrxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.571270 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnrxr\" (UniqueName: \"kubernetes.io/projected/e617e130-c338-40d1-9a5c-83e925a4e6ed-kube-api-access-cnrxr\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.888101 4732 generic.go:334] "Generic (PLEG): container finished" podID="e617e130-c338-40d1-9a5c-83e925a4e6ed" containerID="c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d" exitCode=0
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.888151 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-hbpcb" event={"ID":"e617e130-c338-40d1-9a5c-83e925a4e6ed","Type":"ContainerDied","Data":"c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d"}
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.888200 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-hbpcb" event={"ID":"e617e130-c338-40d1-9a5c-83e925a4e6ed","Type":"ContainerDied","Data":"6b9559657e742c2dca3e80a6f30ed1e3ae9bae7e31be4ed1e6ca772141139c64"}
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.888201 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-hbpcb"
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.888219 4732 scope.go:117] "RemoveContainer" containerID="c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d"
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.890044 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq" event={"ID":"241eae26-3908-40e0-af9c-59b54a6ab1a0","Type":"ContainerDied","Data":"0967cf246a03d050c7636d21dd8ba1334433e9343155d38efb7a34b23ed4beb6"}
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.890122 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.936941 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-hbpcb"]
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.937360 4732 scope.go:117] "RemoveContainer" containerID="c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d"
Jan 31 09:23:35 crc kubenswrapper[4732]: E0131 09:23:35.937862 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d\": container with ID starting with c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d not found: ID does not exist" containerID="c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d"
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.937909 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d"} err="failed to get container status \"c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d\": rpc error: code = NotFound desc = could not find container \"c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d\": container with ID starting with c4ac04f58cff4deda6adf78a5af0c30957b4f3ca033ed38c187c5a81f58e6b7d not found: ID does not exist"
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.937938 4732 scope.go:117] "RemoveContainer" containerID="33284c815fdcf867d64c94279b754de6fc3557337b45dc375a7c56112e2acde1"
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.955099 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-hbpcb"]
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.966412 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"]
Jan 31 09:23:35 crc kubenswrapper[4732]: I0131 09:23:35.973751 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-57cdd4758c-lh9cq"]
Jan 31 09:23:36 crc kubenswrapper[4732]: I0131 09:23:36.550705 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00bc065b-6932-4b27-bd33-5d8618f8a4f1" path="/var/lib/kubelet/pods/00bc065b-6932-4b27-bd33-5d8618f8a4f1/volumes"
Jan 31 09:23:36 crc kubenswrapper[4732]: I0131 09:23:36.551801 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="241eae26-3908-40e0-af9c-59b54a6ab1a0" path="/var/lib/kubelet/pods/241eae26-3908-40e0-af9c-59b54a6ab1a0/volumes"
Jan 31 09:23:36 crc kubenswrapper[4732]: I0131 09:23:36.552300 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e617e130-c338-40d1-9a5c-83e925a4e6ed" path="/var/lib/kubelet/pods/e617e130-c338-40d1-9a5c-83e925a4e6ed/volumes"
Jan 31 09:23:36 crc kubenswrapper[4732]: I0131 09:23:36.918227 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954"]
Jan 31 09:23:36 crc kubenswrapper[4732]: I0131 09:23:36.918451 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" podUID="79621d02-e834-4725-8b80-d0444f3b6487" containerName="operator" containerID="cri-o://288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b" gracePeriod=10
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.238217 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hbtxq"]
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.238692 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" podUID="9ff1fd2b-a8cb-4050-9a54-3117be6964ce" containerName="registry-server" containerID="cri-o://fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0" gracePeriod=30
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.266123 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl"]
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.270599 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sncrl"]
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.387311 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954"
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.509172 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/79621d02-e834-4725-8b80-d0444f3b6487-kube-api-access-ztcb8\") pod \"79621d02-e834-4725-8b80-d0444f3b6487\" (UID: \"79621d02-e834-4725-8b80-d0444f3b6487\") "
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.528950 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79621d02-e834-4725-8b80-d0444f3b6487-kube-api-access-ztcb8" (OuterVolumeSpecName: "kube-api-access-ztcb8") pod "79621d02-e834-4725-8b80-d0444f3b6487" (UID: "79621d02-e834-4725-8b80-d0444f3b6487"). InnerVolumeSpecName "kube-api-access-ztcb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.610511 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztcb8\" (UniqueName: \"kubernetes.io/projected/79621d02-e834-4725-8b80-d0444f3b6487-kube-api-access-ztcb8\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.630709 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq"
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.812718 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdnc7\" (UniqueName: \"kubernetes.io/projected/9ff1fd2b-a8cb-4050-9a54-3117be6964ce-kube-api-access-tdnc7\") pod \"9ff1fd2b-a8cb-4050-9a54-3117be6964ce\" (UID: \"9ff1fd2b-a8cb-4050-9a54-3117be6964ce\") "
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.815473 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff1fd2b-a8cb-4050-9a54-3117be6964ce-kube-api-access-tdnc7" (OuterVolumeSpecName: "kube-api-access-tdnc7") pod "9ff1fd2b-a8cb-4050-9a54-3117be6964ce" (UID: "9ff1fd2b-a8cb-4050-9a54-3117be6964ce"). InnerVolumeSpecName "kube-api-access-tdnc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.913856 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdnc7\" (UniqueName: \"kubernetes.io/projected/9ff1fd2b-a8cb-4050-9a54-3117be6964ce-kube-api-access-tdnc7\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.914022 4732 generic.go:334] "Generic (PLEG): container finished" podID="79621d02-e834-4725-8b80-d0444f3b6487" containerID="288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b" exitCode=0
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.914103 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954"
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.914095 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" event={"ID":"79621d02-e834-4725-8b80-d0444f3b6487","Type":"ContainerDied","Data":"288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b"}
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.914142 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954" event={"ID":"79621d02-e834-4725-8b80-d0444f3b6487","Type":"ContainerDied","Data":"2cb30d5ff86683ad564f64402e0e4a2144f56764496f3cb2ead909cfbd0f5de4"}
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.914159 4732 scope.go:117] "RemoveContainer" containerID="288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b"
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.924986 4732 generic.go:334] "Generic (PLEG): container finished" podID="9ff1fd2b-a8cb-4050-9a54-3117be6964ce" containerID="fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0" exitCode=0
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.925026 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" event={"ID":"9ff1fd2b-a8cb-4050-9a54-3117be6964ce","Type":"ContainerDied","Data":"fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0"}
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.925058 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq" event={"ID":"9ff1fd2b-a8cb-4050-9a54-3117be6964ce","Type":"ContainerDied","Data":"5f27d35f0d0db33259c32fd8459b7826ea94c3c12989e06661b903a56351ab64"}
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.925054 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-hbtxq"
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.946575 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954"]
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.958578 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2t954"]
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.963771 4732 scope.go:117] "RemoveContainer" containerID="288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b"
Jan 31 09:23:37 crc kubenswrapper[4732]: E0131 09:23:37.964350 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b\": container with ID starting with 288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b not found: ID does not exist" containerID="288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b"
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.964406 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b"} err="failed to get container status \"288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b\": rpc error: code = NotFound desc = could not find container \"288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b\": container with ID starting with 288dedb19f0678112de5555bcdb82232ae9f8593518e9e520e8756a6802abe9b not found: ID does not exist"
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.964440 4732 scope.go:117] "RemoveContainer" containerID="fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0"
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.965810 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hbtxq"]
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.973737 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-hbtxq"]
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.986568 4732 scope.go:117] "RemoveContainer" containerID="fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0"
Jan 31 09:23:37 crc kubenswrapper[4732]: E0131 09:23:37.987170 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0\": container with ID starting with fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0 not found: ID does not exist" containerID="fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0"
Jan 31 09:23:37 crc kubenswrapper[4732]: I0131 09:23:37.987203 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0"} err="failed to get container status \"fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0\": rpc error: code = NotFound desc = could not find container \"fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0\": container with ID starting with fe9b798d0d4e674d4b68e134ab59cf425b1a6014efda517ffd63f1ccf40b11f0 not found: ID does not exist"
Jan 31 09:23:38 crc kubenswrapper[4732]: I0131 09:23:38.551574 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79621d02-e834-4725-8b80-d0444f3b6487" path="/var/lib/kubelet/pods/79621d02-e834-4725-8b80-d0444f3b6487/volumes"
Jan 31 09:23:38 crc kubenswrapper[4732]: I0131 09:23:38.552535 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86012593-15ec-4f3c-aaa4-c0522a918019" path="/var/lib/kubelet/pods/86012593-15ec-4f3c-aaa4-c0522a918019/volumes"
Jan 31 09:23:38 crc kubenswrapper[4732]: I0131 09:23:38.553303 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff1fd2b-a8cb-4050-9a54-3117be6964ce" path="/var/lib/kubelet/pods/9ff1fd2b-a8cb-4050-9a54-3117be6964ce/volumes"
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.264686 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"]
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.265228 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms" podUID="415e013e-ff2f-47b6-a17d-c0ba8f80071a" containerName="manager" containerID="cri-o://b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22" gracePeriod=10
Jan 31 09:23:42 crc kubenswrapper[4732]: E0131 09:23:42.477158 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found
Jan 31 09:23:42 crc kubenswrapper[4732]: E0131 09:23:42.477477 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts podName:1654407d-7276-4839-839d-1244759c4ad2 nodeName:}" failed. No retries permitted until 2026-01-31 09:23:58.477457971 +0000 UTC m=+1376.783334175 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts") pod "keystone5b86-account-delete-qkt4p" (UID: "1654407d-7276-4839-839d-1244759c4ad2") : configmap "openstack-scripts" not found
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.551077 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-lbfxz"]
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.551305 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-lbfxz" podUID="0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf" containerName="registry-server" containerID="cri-o://910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f" gracePeriod=30
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.593209 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"]
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.598965 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576bchhg"]
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.785068 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.883949 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-apiservice-cert\") pod \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") "
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.884093 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-webhook-cert\") pod \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") "
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.884317 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp7fg\" (UniqueName: \"kubernetes.io/projected/415e013e-ff2f-47b6-a17d-c0ba8f80071a-kube-api-access-vp7fg\") pod \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\" (UID: \"415e013e-ff2f-47b6-a17d-c0ba8f80071a\") "
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.895890 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "415e013e-ff2f-47b6-a17d-c0ba8f80071a" (UID: "415e013e-ff2f-47b6-a17d-c0ba8f80071a"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.895932 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415e013e-ff2f-47b6-a17d-c0ba8f80071a-kube-api-access-vp7fg" (OuterVolumeSpecName: "kube-api-access-vp7fg") pod "415e013e-ff2f-47b6-a17d-c0ba8f80071a" (UID: "415e013e-ff2f-47b6-a17d-c0ba8f80071a"). InnerVolumeSpecName "kube-api-access-vp7fg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.896465 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "415e013e-ff2f-47b6-a17d-c0ba8f80071a" (UID: "415e013e-ff2f-47b6-a17d-c0ba8f80071a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.924482 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-lbfxz"
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.984194 4732 generic.go:334] "Generic (PLEG): container finished" podID="415e013e-ff2f-47b6-a17d-c0ba8f80071a" containerID="b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22" exitCode=0
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.984250 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms" event={"ID":"415e013e-ff2f-47b6-a17d-c0ba8f80071a","Type":"ContainerDied","Data":"b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22"}
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.984288 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms" event={"ID":"415e013e-ff2f-47b6-a17d-c0ba8f80071a","Type":"ContainerDied","Data":"ba8a3b878f56e9627430ea250d8fbd2f4a60f48d4ce391e390485cb7a6931e6c"}
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.984305 4732 scope.go:117] "RemoveContainer" containerID="b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22"
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.984407 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.986137 4732 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.986165 4732 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/415e013e-ff2f-47b6-a17d-c0ba8f80071a-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.986178 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp7fg\" (UniqueName: \"kubernetes.io/projected/415e013e-ff2f-47b6-a17d-c0ba8f80071a-kube-api-access-vp7fg\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.988552 4732 generic.go:334] "Generic (PLEG): container finished" podID="0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf" containerID="910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f" exitCode=0
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.988592 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-lbfxz" event={"ID":"0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf","Type":"ContainerDied","Data":"910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f"}
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.988622 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-lbfxz" event={"ID":"0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf","Type":"ContainerDied","Data":"28d499c0e0c3ca32837e0d3a623958320049a25bd1e23f61b0a33ef2f6ce6116"}
Jan 31 09:23:42 crc kubenswrapper[4732]: I0131 09:23:42.988624 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-lbfxz"
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.016180 4732 scope.go:117] "RemoveContainer" containerID="b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22"
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.017155 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"]
Jan 31 09:23:43 crc kubenswrapper[4732]: E0131 09:23:43.017204 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22\": container with ID starting with b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22 not found: ID does not exist" containerID="b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22"
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.017280 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22"} err="failed to get container status \"b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22\": rpc error: code = NotFound desc = could not find container \"b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22\": container with ID starting with b7c1c08d282d712f9e0a65331b190ff2ebed938312c6f27e081d2971e6619b22 not found: ID does not exist"
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.017313 4732 scope.go:117] "RemoveContainer" containerID="910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f"
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.022390 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7f868546f6-qkfms"]
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.038100 4732 scope.go:117] "RemoveContainer" containerID="910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f"
Jan 31 09:23:43 crc kubenswrapper[4732]: E0131 09:23:43.038473 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f\": container with ID starting with 910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f not found: ID does not exist" containerID="910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f"
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.038497 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f"} err="failed to get container status \"910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f\": rpc error: code = NotFound desc = could not find container \"910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f\": container with ID starting with 910c1830295490a33aa5bee975620416b3ee2fad21058cc6bdfc3765079fd37f not found: ID does not exist"
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.089308 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgj7k\" (UniqueName: \"kubernetes.io/projected/0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf-kube-api-access-jgj7k\") pod \"0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf\" (UID: \"0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf\") "
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.093529 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf-kube-api-access-jgj7k" (OuterVolumeSpecName: "kube-api-access-jgj7k") pod "0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf" (UID: "0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf"). InnerVolumeSpecName "kube-api-access-jgj7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.190729 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgj7k\" (UniqueName: \"kubernetes.io/projected/0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf-kube-api-access-jgj7k\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.319706 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-lbfxz"]
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.332466 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-lbfxz"]
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.907069 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq"]
Jan 31 09:23:43 crc kubenswrapper[4732]: I0131 09:23:43.907617 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" podUID="088a3743-a071-4b0e-9cd8-66271eaeafdb" containerName="manager" containerID="cri-o://8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62" gracePeriod=10
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.095713 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-b545g"]
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.095943 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-b545g" podUID="e502d767-ed18-4540-8d2c-ffd993e4822d" containerName="registry-server" containerID="cri-o://e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7" gracePeriod=30
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.130943 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"]
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.135886 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40gl664"]
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.337375 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq"
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.510015 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jb9w\" (UniqueName: \"kubernetes.io/projected/088a3743-a071-4b0e-9cd8-66271eaeafdb-kube-api-access-9jb9w\") pod \"088a3743-a071-4b0e-9cd8-66271eaeafdb\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") "
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.510078 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-apiservice-cert\") pod \"088a3743-a071-4b0e-9cd8-66271eaeafdb\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") "
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.510156 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-webhook-cert\") pod \"088a3743-a071-4b0e-9cd8-66271eaeafdb\" (UID: \"088a3743-a071-4b0e-9cd8-66271eaeafdb\") "
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.515455 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "088a3743-a071-4b0e-9cd8-66271eaeafdb" (UID: "088a3743-a071-4b0e-9cd8-66271eaeafdb"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.515592 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088a3743-a071-4b0e-9cd8-66271eaeafdb-kube-api-access-9jb9w" (OuterVolumeSpecName: "kube-api-access-9jb9w") pod "088a3743-a071-4b0e-9cd8-66271eaeafdb" (UID: "088a3743-a071-4b0e-9cd8-66271eaeafdb"). InnerVolumeSpecName "kube-api-access-9jb9w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.527432 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "088a3743-a071-4b0e-9cd8-66271eaeafdb" (UID: "088a3743-a071-4b0e-9cd8-66271eaeafdb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.552964 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf" path="/var/lib/kubelet/pods/0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf/volumes"
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.553583 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2996ae7b-aedb-4a67-a98e-b1a466347be0" path="/var/lib/kubelet/pods/2996ae7b-aedb-4a67-a98e-b1a466347be0/volumes"
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.554384 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415e013e-ff2f-47b6-a17d-c0ba8f80071a" path="/var/lib/kubelet/pods/415e013e-ff2f-47b6-a17d-c0ba8f80071a/volumes"
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.555722 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a" path="/var/lib/kubelet/pods/f5ae42fb-86af-4a2d-9570-b3be9f3f8f4a/volumes"
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.564827 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.611552 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jb9w\" (UniqueName: \"kubernetes.io/projected/088a3743-a071-4b0e-9cd8-66271eaeafdb-kube-api-access-9jb9w\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.611836 4732 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.611925 4732 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088a3743-a071-4b0e-9cd8-66271eaeafdb-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.712369 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl5c8\" (UniqueName: \"kubernetes.io/projected/e502d767-ed18-4540-8d2c-ffd993e4822d-kube-api-access-jl5c8\") pod \"e502d767-ed18-4540-8d2c-ffd993e4822d\" (UID: \"e502d767-ed18-4540-8d2c-ffd993e4822d\") "
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.718877 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e502d767-ed18-4540-8d2c-ffd993e4822d-kube-api-access-jl5c8" (OuterVolumeSpecName: "kube-api-access-jl5c8") pod "e502d767-ed18-4540-8d2c-ffd993e4822d" (UID: "e502d767-ed18-4540-8d2c-ffd993e4822d"). InnerVolumeSpecName "kube-api-access-jl5c8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 09:23:44 crc kubenswrapper[4732]: I0131 09:23:44.814419 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl5c8\" (UniqueName: \"kubernetes.io/projected/e502d767-ed18-4540-8d2c-ffd993e4822d-kube-api-access-jl5c8\") on node \"crc\" DevicePath \"\""
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.009411 4732 generic.go:334] "Generic (PLEG): container finished" podID="088a3743-a071-4b0e-9cd8-66271eaeafdb" containerID="8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62" exitCode=0
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.009493 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" event={"ID":"088a3743-a071-4b0e-9cd8-66271eaeafdb","Type":"ContainerDied","Data":"8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62"}
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.009526 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq" event={"ID":"088a3743-a071-4b0e-9cd8-66271eaeafdb","Type":"ContainerDied","Data":"afc983ab3b957756283c86e705571f21643e9da88c0bed95193b050d0c42bb03"}
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.009576 4732 scope.go:117] "RemoveContainer" containerID="8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62"
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.009771 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq"
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.014182 4732 generic.go:334] "Generic (PLEG): container finished" podID="e502d767-ed18-4540-8d2c-ffd993e4822d" containerID="e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7" exitCode=0
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.014231 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b545g" event={"ID":"e502d767-ed18-4540-8d2c-ffd993e4822d","Type":"ContainerDied","Data":"e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7"}
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.014263 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-b545g" event={"ID":"e502d767-ed18-4540-8d2c-ffd993e4822d","Type":"ContainerDied","Data":"e336ec36e7b92b98ffa9cd023af81299e0d2a21836b58f3096392afd27981f20"}
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.014275 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-b545g"
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.032306 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq"]
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.039067 4732 scope.go:117] "RemoveContainer" containerID="8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62"
Jan 31 09:23:45 crc kubenswrapper[4732]: E0131 09:23:45.039679 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62\": container with ID starting with 8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62 not found: ID does not exist" containerID="8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62"
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.039711 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-665d897fbd-rcqsq"]
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.039716 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62"} err="failed to get container status \"8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62\": rpc error: code = NotFound desc = could not find container \"8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62\": container with ID starting with 8d293f0ee732a310f5bc8795228ce580921819546f87301342b14a14ba38ea62 not found: ID does not exist"
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.039816 4732 scope.go:117] "RemoveContainer" containerID="e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7"
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.061874 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-b545g"]
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.065155 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-b545g"]
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.068202 4732 scope.go:117] "RemoveContainer" containerID="e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7"
Jan 31 09:23:45 crc kubenswrapper[4732]: E0131 09:23:45.068648 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7\": container with ID starting with e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7 not found: ID does not exist" containerID="e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7"
Jan 31 09:23:45 crc kubenswrapper[4732]: I0131 09:23:45.068704 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7"} err="failed to get container status \"e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7\": rpc error: code = NotFound desc = could not find container \"e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7\": container with ID starting with e0a2d107fb0ab5da52d563b9ed97ddc9f7ce9c51dfb332362a03b85f4c2acfe7 not found: ID does not exist"
Jan 31 09:23:46 crc kubenswrapper[4732]: I0131 09:23:46.551137 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088a3743-a071-4b0e-9cd8-66271eaeafdb" path="/var/lib/kubelet/pods/088a3743-a071-4b0e-9cd8-66271eaeafdb/volumes"
Jan 31 09:23:46 crc kubenswrapper[4732]: I0131 09:23:46.552202 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e502d767-ed18-4540-8d2c-ffd993e4822d" path="/var/lib/kubelet/pods/e502d767-ed18-4540-8d2c-ffd993e4822d/volumes"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.511245 4732 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.512025 4732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts podName:1654407d-7276-4839-839d-1244759c4ad2 nodeName:}" failed. No retries permitted until 2026-01-31 09:24:30.51199475 +0000 UTC m=+1408.817870984 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts") pod "keystone5b86-account-delete-qkt4p" (UID: "1654407d-7276-4839-839d-1244759c4ad2") : configmap "openstack-scripts" not found
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.576775 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4v4x4/must-gather-8vtn6"]
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577193 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" containerName="setup-container"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577214 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" containerName="setup-container"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577240 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7eb0179-b292-4a09-a07d-3d9bfe7978f3" containerName="mysql-bootstrap"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577250 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7eb0179-b292-4a09-a07d-3d9bfe7978f3" containerName="mysql-bootstrap"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577270 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0682a582-79d6-4286-9a43-e4a258dde73f" containerName="galera"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577279 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="0682a582-79d6-4286-9a43-e4a258dde73f" containerName="galera"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577307 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff1fd2b-a8cb-4050-9a54-3117be6964ce" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577316 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff1fd2b-a8cb-4050-9a54-3117be6964ce" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577333 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e617e130-c338-40d1-9a5c-83e925a4e6ed" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577341 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="e617e130-c338-40d1-9a5c-83e925a4e6ed" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577354 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e502d767-ed18-4540-8d2c-ffd993e4822d" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577362 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="e502d767-ed18-4540-8d2c-ffd993e4822d" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577381 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" containerName="rabbitmq"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577390 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" containerName="rabbitmq"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577415 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerName="barbican-worker-log"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577424 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerName="barbican-worker-log"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577442 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241eae26-3908-40e0-af9c-59b54a6ab1a0" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577450 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="241eae26-3908-40e0-af9c-59b54a6ab1a0" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577471 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577479 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577493 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577502 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577521 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d4fa62-a33c-4ab2-a446-697994c1541e" containerName="memcached"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577550 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d4fa62-a33c-4ab2-a446-697994c1541e" containerName="memcached"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577578 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7eb0179-b292-4a09-a07d-3d9bfe7978f3" containerName="galera"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577587 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7eb0179-b292-4a09-a07d-3d9bfe7978f3" containerName="galera"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577603 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b3a350-8b41-44a0-a6b4-b957947e1df6" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577611 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b3a350-8b41-44a0-a6b4-b957947e1df6" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577621 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d550c8-968a-4962-9e23-c0c22911913d" containerName="mariadb-account-delete"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577629 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d550c8-968a-4962-9e23-c0c22911913d" containerName="mariadb-account-delete"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577649 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef2e0aa-0782-4814-9d5f-9a6a32fb121b" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577657 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef2e0aa-0782-4814-9d5f-9a6a32fb121b" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577695 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415e013e-ff2f-47b6-a17d-c0ba8f80071a" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577708 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="415e013e-ff2f-47b6-a17d-c0ba8f80071a" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577726 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerName="barbican-api-log"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577736 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerName="barbican-api-log"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577757 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616eedfe-830a-4ca8-9c42-a2cfd9352312" containerName="galera"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577766 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="616eedfe-830a-4ca8-9c42-a2cfd9352312" containerName="galera"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577785 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerName="barbican-worker"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577794 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerName="barbican-worker"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577817 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee84530-efd7-4d83-9aa2-fb9b8b178496" containerName="keystone-api"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577826 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee84530-efd7-4d83-9aa2-fb9b8b178496" containerName="keystone-api"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577844 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088a3743-a071-4b0e-9cd8-66271eaeafdb" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577853 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="088a3743-a071-4b0e-9cd8-66271eaeafdb" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577863 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f93e90-1e9a-439c-a130-487ebf54ad10" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577871 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f93e90-1e9a-439c-a130-487ebf54ad10" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577894 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616eedfe-830a-4ca8-9c42-a2cfd9352312" containerName="mysql-bootstrap"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577902 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="616eedfe-830a-4ca8-9c42-a2cfd9352312" containerName="mysql-bootstrap"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577920 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerName="barbican-api"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577930 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerName="barbican-api"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577947 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0682a582-79d6-4286-9a43-e4a258dde73f" containerName="mysql-bootstrap"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577955 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="0682a582-79d6-4286-9a43-e4a258dde73f" containerName="mysql-bootstrap"
Jan 31 09:23:58 crc kubenswrapper[4732]: E0131 09:23:58.577975 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79621d02-e834-4725-8b80-d0444f3b6487" containerName="operator"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.577988 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="79621d02-e834-4725-8b80-d0444f3b6487" containerName="operator"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578217 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerName="barbican-worker-log"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578229 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b3a350-8b41-44a0-a6b4-b957947e1df6" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578238 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4e6dfd-8108-4ccb-8f6b-0d81bb1adebf" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578249 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7eb0179-b292-4a09-a07d-3d9bfe7978f3" containerName="galera"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578263 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d4fa62-a33c-4ab2-a446-697994c1541e" containerName="memcached"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578282 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d550c8-968a-4962-9e23-c0c22911913d" containerName="mariadb-account-delete"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578293 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="241eae26-3908-40e0-af9c-59b54a6ab1a0" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578308 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="24dbbc5e-c4aa-4f6a-b6e6-52b3013443cf" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578319 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="79621d02-e834-4725-8b80-d0444f3b6487" containerName="operator"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578337 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f93e90-1e9a-439c-a130-487ebf54ad10" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578351 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef2e0aa-0782-4814-9d5f-9a6a32fb121b" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578363 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="e617e130-c338-40d1-9a5c-83e925a4e6ed" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578379 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc71a61f-ccf2-43bb-aedb-71f2ec9f03bd" containerName="rabbitmq"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578396 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="e502d767-ed18-4540-8d2c-ffd993e4822d" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578413 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="088a3743-a071-4b0e-9cd8-66271eaeafdb" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578427 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b1cc40-9985-45d8-bb06-0676ff188c6c" containerName="barbican-worker"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578443 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="616eedfe-830a-4ca8-9c42-a2cfd9352312" containerName="galera"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578454 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="0682a582-79d6-4286-9a43-e4a258dde73f" containerName="galera"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578469 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee84530-efd7-4d83-9aa2-fb9b8b178496" containerName="keystone-api"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578485 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff1fd2b-a8cb-4050-9a54-3117be6964ce" containerName="registry-server"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578496 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="415e013e-ff2f-47b6-a17d-c0ba8f80071a" containerName="manager"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578514 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerName="barbican-api-log"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.578531 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="4981d9a9-898f-49ff-809d-58c7ca3bd2a3" containerName="barbican-api"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.585872 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4v4x4/must-gather-8vtn6"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.593430 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4v4x4"/"kube-root-ca.crt"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.594054 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4v4x4"/"default-dockercfg-hnm9l"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.594242 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4v4x4"/"openshift-service-ca.crt"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.600051 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4v4x4/must-gather-8vtn6"]
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.616811 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkkgd\" (UniqueName: \"kubernetes.io/projected/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-kube-api-access-bkkgd\") pod \"must-gather-8vtn6\" (UID: \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\") " pod="openshift-must-gather-4v4x4/must-gather-8vtn6"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.616888 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-must-gather-output\") pod \"must-gather-8vtn6\" (UID: \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\") " pod="openshift-must-gather-4v4x4/must-gather-8vtn6"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.719006 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkkgd\" (UniqueName: \"kubernetes.io/projected/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-kube-api-access-bkkgd\") pod \"must-gather-8vtn6\" (UID: \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\") " pod="openshift-must-gather-4v4x4/must-gather-8vtn6"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.719075 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-must-gather-output\") pod \"must-gather-8vtn6\" (UID: \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\") " pod="openshift-must-gather-4v4x4/must-gather-8vtn6"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.719741 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-must-gather-output\") pod \"must-gather-8vtn6\" (UID: \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\") " pod="openshift-must-gather-4v4x4/must-gather-8vtn6"
Jan 31 09:23:58 crc kubenswrapper[4732]: I0131 09:23:58.741204 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkkgd\" (UniqueName: \"kubernetes.io/projected/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-kube-api-access-bkkgd\") pod \"must-gather-8vtn6\" (UID: \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\") " pod="openshift-must-gather-4v4x4/must-gather-8vtn6"
Jan 31 09:23:59 crc kubenswrapper[4732]: I0131 09:23:59.084067 4732 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-4v4x4/must-gather-8vtn6" Jan 31 09:23:59 crc kubenswrapper[4732]: I0131 09:23:59.497434 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4v4x4/must-gather-8vtn6"] Jan 31 09:23:59 crc kubenswrapper[4732]: W0131 09:23:59.510384 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefcd56a7_a326_43ac_8d3e_c1a2fc2a464f.slice/crio-361c00622edab92e61d2b8d67ab19b41602e8592ca012dc381342782152982d7 WatchSource:0}: Error finding container 361c00622edab92e61d2b8d67ab19b41602e8592ca012dc381342782152982d7: Status 404 returned error can't find the container with id 361c00622edab92e61d2b8d67ab19b41602e8592ca012dc381342782152982d7 Jan 31 09:23:59 crc kubenswrapper[4732]: I0131 09:23:59.513275 4732 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:24:00 crc kubenswrapper[4732]: I0131 09:24:00.139712 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4v4x4/must-gather-8vtn6" event={"ID":"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f","Type":"ContainerStarted","Data":"361c00622edab92e61d2b8d67ab19b41602e8592ca012dc381342782152982d7"} Jan 31 09:24:00 crc kubenswrapper[4732]: I0131 09:24:00.462149 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:24:00 crc kubenswrapper[4732]: I0131 09:24:00.604007 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssz6z\" (UniqueName: \"kubernetes.io/projected/1654407d-7276-4839-839d-1244759c4ad2-kube-api-access-ssz6z\") pod \"1654407d-7276-4839-839d-1244759c4ad2\" (UID: \"1654407d-7276-4839-839d-1244759c4ad2\") " Jan 31 09:24:00 crc kubenswrapper[4732]: I0131 09:24:00.604086 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts\") pod \"1654407d-7276-4839-839d-1244759c4ad2\" (UID: \"1654407d-7276-4839-839d-1244759c4ad2\") " Jan 31 09:24:00 crc kubenswrapper[4732]: I0131 09:24:00.604950 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1654407d-7276-4839-839d-1244759c4ad2" (UID: "1654407d-7276-4839-839d-1244759c4ad2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:24:00 crc kubenswrapper[4732]: I0131 09:24:00.619019 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1654407d-7276-4839-839d-1244759c4ad2-kube-api-access-ssz6z" (OuterVolumeSpecName: "kube-api-access-ssz6z") pod "1654407d-7276-4839-839d-1244759c4ad2" (UID: "1654407d-7276-4839-839d-1244759c4ad2"). InnerVolumeSpecName "kube-api-access-ssz6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:24:00 crc kubenswrapper[4732]: I0131 09:24:00.705600 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssz6z\" (UniqueName: \"kubernetes.io/projected/1654407d-7276-4839-839d-1244759c4ad2-kube-api-access-ssz6z\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:00 crc kubenswrapper[4732]: I0131 09:24:00.705649 4732 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1654407d-7276-4839-839d-1244759c4ad2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 09:24:01 crc kubenswrapper[4732]: I0131 09:24:01.152267 4732 generic.go:334] "Generic (PLEG): container finished" podID="1654407d-7276-4839-839d-1244759c4ad2" containerID="79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069" exitCode=137 Jan 31 09:24:01 crc kubenswrapper[4732]: I0131 09:24:01.152306 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" event={"ID":"1654407d-7276-4839-839d-1244759c4ad2","Type":"ContainerDied","Data":"79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069"} Jan 31 09:24:01 crc kubenswrapper[4732]: I0131 09:24:01.152329 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" event={"ID":"1654407d-7276-4839-839d-1244759c4ad2","Type":"ContainerDied","Data":"7d6e2907463735b69f40b7370fe0069cdb743cf23d1f9aeb0278bee0ffa6f8b0"} Jan 31 09:24:01 crc kubenswrapper[4732]: I0131 09:24:01.152338 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone5b86-account-delete-qkt4p" Jan 31 09:24:01 crc kubenswrapper[4732]: I0131 09:24:01.152345 4732 scope.go:117] "RemoveContainer" containerID="79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069" Jan 31 09:24:01 crc kubenswrapper[4732]: I0131 09:24:01.168499 4732 scope.go:117] "RemoveContainer" containerID="79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069" Jan 31 09:24:01 crc kubenswrapper[4732]: E0131 09:24:01.168859 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069\": container with ID starting with 79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069 not found: ID does not exist" containerID="79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069" Jan 31 09:24:01 crc kubenswrapper[4732]: I0131 09:24:01.168888 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069"} err="failed to get container status \"79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069\": rpc error: code = NotFound desc = could not find container \"79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069\": container with ID starting with 79fac5d2960d6f9193efd13767f68129541f3654fafcc49472d1b08351983069 not found: ID does not exist" Jan 31 09:24:01 crc kubenswrapper[4732]: I0131 09:24:01.187264 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone5b86-account-delete-qkt4p"] Jan 31 09:24:01 crc kubenswrapper[4732]: I0131 09:24:01.193217 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone5b86-account-delete-qkt4p"] Jan 31 09:24:02 crc kubenswrapper[4732]: I0131 09:24:02.549295 
4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1654407d-7276-4839-839d-1244759c4ad2" path="/var/lib/kubelet/pods/1654407d-7276-4839-839d-1244759c4ad2/volumes" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.181351 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4v4x4/must-gather-8vtn6" event={"ID":"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f","Type":"ContainerStarted","Data":"f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854"} Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.181734 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4v4x4/must-gather-8vtn6" event={"ID":"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f","Type":"ContainerStarted","Data":"3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59"} Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.200138 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4v4x4/must-gather-8vtn6" podStartSLOduration=2.671481815 podStartE2EDuration="6.200119267s" podCreationTimestamp="2026-01-31 09:23:58 +0000 UTC" firstStartedPulling="2026-01-31 09:23:59.512449275 +0000 UTC m=+1377.818325479" lastFinishedPulling="2026-01-31 09:24:03.041086707 +0000 UTC m=+1381.346962931" observedRunningTime="2026-01-31 09:24:04.19505293 +0000 UTC m=+1382.500929164" watchObservedRunningTime="2026-01-31 09:24:04.200119267 +0000 UTC m=+1382.505995471" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.310958 4732 scope.go:117] "RemoveContainer" containerID="fecab232c8eea055dcb26c6531c6bec4d55c835988b0fcd4b2a823ee72e633d6" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.345248 4732 scope.go:117] "RemoveContainer" containerID="8c02dfae66fd3a4f5b4725177171b29a50b8756093929474b400347824b72530" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.368161 4732 scope.go:117] "RemoveContainer" containerID="f344ba5284e12746a74007d5adc8aa42e8ee48a5c3588cb47db5117f3c83e9f0" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.389272 4732 scope.go:117] "RemoveContainer" containerID="05dc77d92e7403bddf9c4216d3f9e9927975c83db475a4592b0fe670adbee115" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.417139 4732 scope.go:117] "RemoveContainer" containerID="90c15d5ab54a0813e658db668ab02d84908650e855cd0db9d18d9e71000d2b80" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.441606 4732 scope.go:117] "RemoveContainer" containerID="c9325e278cfd071d34b0338c159d59fb628048d898065c645b245c3766ab28e7" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.474894 4732 scope.go:117] "RemoveContainer" containerID="9e931ecd0e73f528e7ed41d0a730c73d763f5b49adf745bec9e51e72d6012d62" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.496281 4732 scope.go:117] "RemoveContainer" containerID="61b0a7c08542a13be8ba2b2cae47287416caa788b605c317245cad2aa213e84a" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.513485 4732 scope.go:117] "RemoveContainer" containerID="fe93f24b1b1ca8fe4c671105ac0fbeb376225843e31d3614ced260876fce3216" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.531528 4732 scope.go:117] "RemoveContainer" containerID="c0a354876c3578c412d4a67e39a312dc83add567ea01562dcdd52716d48fc242" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.566812 4732 scope.go:117] "RemoveContainer" containerID="a86707b4de5b35e9337d2beb5deb8d861dead56d2dc194d07926deea0b9a63f5" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.586482 4732 scope.go:117] "RemoveContainer" 
containerID="a890dd33402e340e3c5622818af3229b97a815440f243204f47c951187c50dbe" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.631374 4732 scope.go:117] "RemoveContainer" containerID="09dedf44a2244d0aa2f52cb5add6cbbb78a1014d9997dba9dbd36b2416c85acf" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.671321 4732 scope.go:117] "RemoveContainer" containerID="34e2cf0f4161fbb85eb0df7049a2e4f5c156fd2607d987c6bef858acc9fffa46" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.690637 4732 scope.go:117] "RemoveContainer" containerID="443c8d984e9ef1e7b308f390e27b580b627477703c6492f50cd0b13f5a986a0a" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.707651 4732 scope.go:117] "RemoveContainer" containerID="5797783d92fce2d40396a5e635eb5cba9fcbde2d8555d22fc23d15fbbb6337bb" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.721071 4732 scope.go:117] "RemoveContainer" containerID="47dfd350e39daf4b8f033d97a0ce4f52541999043f126251274978479fb72c51" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.735561 4732 scope.go:117] "RemoveContainer" containerID="e8c730305c3c39eb9242a11659fb49c02690f7b2e9c98fd70ae0da288de53e06" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.750844 4732 scope.go:117] "RemoveContainer" containerID="87f7881198e2bf8721939abbe73ccd62c3cbc8681018136ac024bf795609a719" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.764611 4732 scope.go:117] "RemoveContainer" containerID="381b993ee77c54308c8a7301a323a8aafc300c3e038f7b063863ce23feb4f43f" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.782431 4732 scope.go:117] "RemoveContainer" containerID="a5bd2e1a25f92dbcf8d8999d3428639a204c1c32490db7803f3573205b07d825" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.796889 4732 scope.go:117] "RemoveContainer" containerID="6e1beb997853403436be77394dbddcd82093917a762178f04e455efdc50a1f83" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.813017 4732 scope.go:117] "RemoveContainer" containerID="7a57753b9625c7248d75214d3dccdb94e632617b9f7c21be3bc87c9177d4ca52" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.830971 4732 scope.go:117] "RemoveContainer" containerID="75bdd5fe1a4f1b74d5a4dbcdde8f474e6bc05519374a21ed6a1e8f88735183f3" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.856293 4732 scope.go:117] "RemoveContainer" containerID="4b3511c94c5ac6d2254541268344c305851e16ace80321bcfccadaad9af571e5" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.883923 4732 scope.go:117] "RemoveContainer" containerID="a37225baa762efba0da855108c4e3b4942f286a7d0c711d1300d72e14f4f3a67" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.901823 4732 scope.go:117] "RemoveContainer" containerID="d5192da51c40249f5f8c71160cc71f8db81535a29bd24997e64ba0998bfe2e66" Jan 31 09:24:04 crc kubenswrapper[4732]: I0131 09:24:04.918542 4732 scope.go:117] "RemoveContainer" containerID="dddea144384fd97086387399ccac7f5e8ab364f8aa2e9d3b1d93f34363e9cf4d" Jan 31 09:24:53 crc kubenswrapper[4732]: I0131 09:24:53.214047 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-v69mc_498d64fc-0d0f-43c6-aaae-bd3c5f0d7873/control-plane-machine-set-operator/0.log" Jan 31 09:24:53 crc kubenswrapper[4732]: I0131 09:24:53.379428 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-54nxd_e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a/kube-rbac-proxy/0.log" Jan 31 09:24:53 crc kubenswrapper[4732]: I0131 09:24:53.553015 4732 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-54nxd_e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a/machine-api-operator/0.log" Jan 31 09:25:05 crc kubenswrapper[4732]: I0131 09:25:05.348486 4732 scope.go:117] "RemoveContainer" containerID="4202ea6b721143639a1a8ee464d8d295428596e358fa14506e850632ab62de19" Jan 31 09:25:17 crc kubenswrapper[4732]: I0131 09:25:17.497428 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:25:17 crc kubenswrapper[4732]: I0131 09:25:17.498132 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.443798 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jq8g8_53b5272f-ac5c-4616-a427-28fc830d7392/controller/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.444039 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jq8g8_53b5272f-ac5c-4616-a427-28fc830d7392/kube-rbac-proxy/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.570084 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-frr-files/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.742242 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-reloader/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.746186 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-reloader/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.776060 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-frr-files/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.777543 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-metrics/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.899805 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-reloader/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.919324 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-frr-files/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.992487 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-metrics/0.log" Jan 31 09:25:20 crc kubenswrapper[4732]: I0131 09:25:20.999490 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-metrics/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.094026 4732 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-frr-files/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.121292 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-reloader/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.159779 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/controller/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.160446 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-metrics/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.291009 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/frr-metrics/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.351178 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/kube-rbac-proxy/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.364618 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/kube-rbac-proxy-frr/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.466979 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/reloader/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.580145 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5l2kt_4b09b4ac-95c1-4c31-99a0-12b38c3412ae/frr-k8s-webhook-server/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.742054 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d8dc66c8b-8p2mn_8ca218dd-0d42-45c8-b4e4-ca638781c915/manager/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.801462 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c98699f55-6bzdf_62f950f6-2a18-4ca6-8cdb-75f47437053a/webhook-server/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.805991 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/frr/0.log" Jan 31 09:25:21 crc kubenswrapper[4732]: I0131 09:25:21.939131 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gcmq2_3fbb0c82-6b72-4313-94e2-3e71d27cf75f/kube-rbac-proxy/0.log" Jan 31 09:25:22 crc kubenswrapper[4732]: I0131 09:25:22.117491 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gcmq2_3fbb0c82-6b72-4313-94e2-3e71d27cf75f/speaker/0.log" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.600720 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nntc6"] Jan 31 09:25:37 crc kubenswrapper[4732]: E0131 09:25:37.601646 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1654407d-7276-4839-839d-1244759c4ad2" containerName="mariadb-account-delete" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.601694 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="1654407d-7276-4839-839d-1244759c4ad2" 
containerName="mariadb-account-delete" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.601901 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="1654407d-7276-4839-839d-1244759c4ad2" containerName="mariadb-account-delete" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.603204 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.608136 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nntc6"] Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.803400 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-catalog-content\") pod \"redhat-marketplace-nntc6\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.803928 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-utilities\") pod \"redhat-marketplace-nntc6\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.804037 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xppp\" (UniqueName: \"kubernetes.io/projected/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-kube-api-access-7xppp\") pod \"redhat-marketplace-nntc6\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.905180 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xppp\" (UniqueName: \"kubernetes.io/projected/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-kube-api-access-7xppp\") pod \"redhat-marketplace-nntc6\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.905261 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-catalog-content\") pod \"redhat-marketplace-nntc6\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.905302 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-utilities\") pod \"redhat-marketplace-nntc6\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.905786 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-utilities\") pod \"redhat-marketplace-nntc6\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.905888 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-catalog-content\") pod \"redhat-marketplace-nntc6\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:37 crc kubenswrapper[4732]: I0131 09:25:37.929486 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xppp\" (UniqueName: \"kubernetes.io/projected/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-kube-api-access-7xppp\") pod \"redhat-marketplace-nntc6\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:38 crc kubenswrapper[4732]: I0131 09:25:38.218280 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:38 crc kubenswrapper[4732]: I0131 09:25:38.660021 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nntc6"] Jan 31 09:25:38 crc kubenswrapper[4732]: I0131 09:25:38.744463 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nntc6" event={"ID":"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f","Type":"ContainerStarted","Data":"59a992697fbb9e4ba41759d7f9425dc054a231d68556433c6fe3cf106dbc6b83"} Jan 31 09:25:38 crc kubenswrapper[4732]: E0131 09:25:38.866993 4732 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e6555ab_3f72_4b8f_a2d6_56f918da2b5f.slice/crio-conmon-18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657.scope\": RecentStats: unable to find data in memory cache]" Jan 31 09:25:39 crc kubenswrapper[4732]: I0131 09:25:39.753326 4732 generic.go:334] "Generic (PLEG): container finished" podID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerID="18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657" exitCode=0 Jan 31 09:25:39 crc kubenswrapper[4732]: I0131 09:25:39.753372 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nntc6" event={"ID":"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f","Type":"ContainerDied","Data":"18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657"} Jan 31 09:25:41 crc kubenswrapper[4732]: I0131 09:25:41.763396 4732 generic.go:334] "Generic (PLEG): container finished" podID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerID="1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620" exitCode=0 Jan 31 09:25:41 crc kubenswrapper[4732]: I0131 09:25:41.763452 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nntc6" event={"ID":"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f","Type":"ContainerDied","Data":"1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620"} Jan 31 09:25:42 crc kubenswrapper[4732]: I0131 09:25:42.771629 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nntc6" event={"ID":"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f","Type":"ContainerStarted","Data":"6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007"} Jan 31 09:25:42 crc kubenswrapper[4732]: I0131 09:25:42.795090 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nntc6" podStartSLOduration=3.117546243 podStartE2EDuration="5.795065267s" podCreationTimestamp="2026-01-31 09:25:37 +0000 UTC" 
firstStartedPulling="2026-01-31 09:25:39.75495787 +0000 UTC m=+1478.060834074" lastFinishedPulling="2026-01-31 09:25:42.432476894 +0000 UTC m=+1480.738353098" observedRunningTime="2026-01-31 09:25:42.790226677 +0000 UTC m=+1481.096102881" watchObservedRunningTime="2026-01-31 09:25:42.795065267 +0000 UTC m=+1481.100941501" Jan 31 09:25:45 crc kubenswrapper[4732]: I0131 09:25:45.830514 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/util/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.024284 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/pull/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.037304 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/util/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.043481 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/pull/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.229859 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/extract/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.232581 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/pull/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.260399 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/util/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.367932 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-utilities/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.527435 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-utilities/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.533025 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-content/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.539734 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-content/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.681900 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-utilities/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.685426 4732 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-content/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.912064 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-utilities/0.log" Jan 31 09:25:46 crc kubenswrapper[4732]: I0131 09:25:46.942787 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/registry-server/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.073255 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-content/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.109832 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-utilities/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.122812 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-content/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.286722 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-utilities/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.333311 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-content/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.487399 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8bchs_d0dbfc52-f4e9-462a-a253-2bb950c04e7b/marketplace-operator/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.497563 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.497685 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.581540 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-utilities/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.698373 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/registry-server/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.758839 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-content/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.766615 4732 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-content/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.781853 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-utilities/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.918748 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-utilities/0.log" Jan 31 09:25:47 crc kubenswrapper[4732]: I0131 09:25:47.921002 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-content/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.032728 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/registry-server/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.111828 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nntc6_7e6555ab-3f72-4b8f-a2d6-56f918da2b5f/extract-utilities/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.218674 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.218719 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.225933 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nntc6_7e6555ab-3f72-4b8f-a2d6-56f918da2b5f/extract-utilities/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.237748 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nntc6_7e6555ab-3f72-4b8f-a2d6-56f918da2b5f/extract-content/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.261142 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nntc6_7e6555ab-3f72-4b8f-a2d6-56f918da2b5f/extract-content/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.263411 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.443951 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nntc6_7e6555ab-3f72-4b8f-a2d6-56f918da2b5f/extract-utilities/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.445273 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nntc6_7e6555ab-3f72-4b8f-a2d6-56f918da2b5f/extract-content/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.464415 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nntc6_7e6555ab-3f72-4b8f-a2d6-56f918da2b5f/registry-server/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.606389 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-utilities/0.log" Jan 
31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.746240 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-utilities/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.762895 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-content/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.774778 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-content/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.849044 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.889201 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nntc6"] Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.936745 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-utilities/0.log" Jan 31 09:25:48 crc kubenswrapper[4732]: I0131 09:25:48.979521 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-content/0.log" Jan 31 09:25:49 crc kubenswrapper[4732]: I0131 09:25:49.328251 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/registry-server/0.log" Jan 31 09:25:50 crc kubenswrapper[4732]: I0131 09:25:50.817106 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nntc6" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="registry-server" containerID="cri-o://6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007" gracePeriod=2 Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.305334 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.491632 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-catalog-content\") pod \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.498308 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xppp\" (UniqueName: \"kubernetes.io/projected/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-kube-api-access-7xppp\") pod \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.498450 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-utilities\") pod \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\" (UID: \"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f\") " Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.499889 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-utilities" (OuterVolumeSpecName: "utilities") pod "7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" (UID: "7e6555ab-3f72-4b8f-a2d6-56f918da2b5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.506789 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-kube-api-access-7xppp" (OuterVolumeSpecName: "kube-api-access-7xppp") pod "7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" (UID: "7e6555ab-3f72-4b8f-a2d6-56f918da2b5f"). InnerVolumeSpecName "kube-api-access-7xppp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.518629 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" (UID: "7e6555ab-3f72-4b8f-a2d6-56f918da2b5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.600367 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.600425 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.600441 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xppp\" (UniqueName: \"kubernetes.io/projected/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f-kube-api-access-7xppp\") on node \"crc\" DevicePath \"\"" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.829141 4732 generic.go:334] "Generic (PLEG): container finished" podID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerID="6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007" exitCode=0 Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.829196 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nntc6" event={"ID":"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f","Type":"ContainerDied","Data":"6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007"} Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.829225 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nntc6" event={"ID":"7e6555ab-3f72-4b8f-a2d6-56f918da2b5f","Type":"ContainerDied","Data":"59a992697fbb9e4ba41759d7f9425dc054a231d68556433c6fe3cf106dbc6b83"} Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.829245 4732 scope.go:117] "RemoveContainer" containerID="6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.829399 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nntc6" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.869759 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nntc6"] Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.876652 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nntc6"] Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.892046 4732 scope.go:117] "RemoveContainer" containerID="1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.913192 4732 scope.go:117] "RemoveContainer" containerID="18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.939962 4732 scope.go:117] "RemoveContainer" containerID="6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007" Jan 31 09:25:51 crc kubenswrapper[4732]: E0131 09:25:51.940651 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007\": container with ID starting with 6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007 not found: ID does not exist" containerID="6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.940728 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007"} err="failed to get container status \"6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007\": rpc error: code = NotFound desc = could not find container \"6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007\": container with ID starting with 6cd3e6a33b07ecf405db08db769e081eb479c7bf7936a8343db50d45e05dd007 not found: ID does not exist" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.940777 4732 scope.go:117] "RemoveContainer" containerID="1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620" Jan 31 09:25:51 crc kubenswrapper[4732]: E0131 09:25:51.942912 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620\": container with ID starting with 1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620 not found: ID does not exist" containerID="1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.942970 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620"} err="failed to get container status \"1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620\": rpc error: code = NotFound desc = could not find container \"1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620\": container with ID starting with 1b61c8f45b6d0b67474b1d360231736bc021a78f046b98c44defe0796d04c620 not found: ID does not exist" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.943012 4732 scope.go:117] "RemoveContainer" containerID="18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657" Jan 31 09:25:51 crc kubenswrapper[4732]: E0131 09:25:51.943410 4732 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657\": container with ID starting with 18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657 not found: ID does not exist" containerID="18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657" Jan 31 09:25:51 crc kubenswrapper[4732]: I0131 09:25:51.943442 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657"} err="failed to get container status \"18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657\": rpc error: code = NotFound desc = could not find container \"18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657\": container with ID starting with 18db5903c8c2eb2a0f15e145c4028d64509fbc52c15e7320b0f4849da6f6b657 not found: ID does not exist" Jan 31 09:25:52 crc kubenswrapper[4732]: I0131 09:25:52.551489 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" path="/var/lib/kubelet/pods/7e6555ab-3f72-4b8f-a2d6-56f918da2b5f/volumes" Jan 31 09:26:05 crc kubenswrapper[4732]: I0131 09:26:05.404937 4732 scope.go:117] "RemoveContainer" containerID="2bf5f760920241f10f11d255f66394a5366cad9af6f5cf262601f44403aa7939" Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816021 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-656dg"] Jan 31 09:26:07 crc kubenswrapper[4732]: E0131 09:26:07.816276 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="extract-utilities" Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816292 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="extract-utilities" Jan 31 09:26:07 crc kubenswrapper[4732]: E0131 09:26:07.816325 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="registry-server" Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816333 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="registry-server" Jan 31 09:26:07 crc kubenswrapper[4732]: E0131 09:26:07.816346 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="extract-content" Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816354 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="extract-content" Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816473 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="registry-server" Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.817608 4732 util.go:30] "No sandbox for pod can be found. 
Jan 31 09:26:05 crc kubenswrapper[4732]: I0131 09:26:05.404937 4732 scope.go:117] "RemoveContainer" containerID="2bf5f760920241f10f11d255f66394a5366cad9af6f5cf262601f44403aa7939"
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816021 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-656dg"]
Jan 31 09:26:07 crc kubenswrapper[4732]: E0131 09:26:07.816276 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="extract-utilities"
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816292 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="extract-utilities"
Jan 31 09:26:07 crc kubenswrapper[4732]: E0131 09:26:07.816325 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="registry-server"
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816333 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="registry-server"
Jan 31 09:26:07 crc kubenswrapper[4732]: E0131 09:26:07.816346 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="extract-content"
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816354 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="extract-content"
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.816473 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6555ab-3f72-4b8f-a2d6-56f918da2b5f" containerName="registry-server"
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.817608 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.830559 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-656dg"]
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.909911 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-utilities\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.910027 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2cw\" (UniqueName: \"kubernetes.io/projected/2e12f87a-681b-4268-a759-5a0043ce9b74-kube-api-access-ll2cw\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:07 crc kubenswrapper[4732]: I0131 09:26:07.910122 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-catalog-content\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.010890 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-utilities\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.010946 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2cw\" (UniqueName: \"kubernetes.io/projected/2e12f87a-681b-4268-a759-5a0043ce9b74-kube-api-access-ll2cw\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.010984 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-catalog-content\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.011559 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-catalog-content\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.011571 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-utilities\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.036054 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2cw\" (UniqueName: \"kubernetes.io/projected/2e12f87a-681b-4268-a759-5a0043ce9b74-kube-api-access-ll2cw\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg"
\"kube-api-access-ll2cw\" (UniqueName: \"kubernetes.io/projected/2e12f87a-681b-4268-a759-5a0043ce9b74-kube-api-access-ll2cw\") pod \"redhat-operators-656dg\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " pod="openshift-marketplace/redhat-operators-656dg" Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.136117 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-656dg" Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.328297 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-656dg"] Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.937566 4732 generic.go:334] "Generic (PLEG): container finished" podID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerID="c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e" exitCode=0 Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.937616 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656dg" event={"ID":"2e12f87a-681b-4268-a759-5a0043ce9b74","Type":"ContainerDied","Data":"c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e"} Jan 31 09:26:08 crc kubenswrapper[4732]: I0131 09:26:08.937644 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656dg" event={"ID":"2e12f87a-681b-4268-a759-5a0043ce9b74","Type":"ContainerStarted","Data":"29b157830fbef40d781d40bed5e3826609c906a763e5b72b90cee5d6d8cea24c"} Jan 31 09:26:09 crc kubenswrapper[4732]: I0131 09:26:09.944838 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656dg" event={"ID":"2e12f87a-681b-4268-a759-5a0043ce9b74","Type":"ContainerStarted","Data":"7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057"} Jan 31 09:26:10 crc kubenswrapper[4732]: I0131 09:26:10.957102 4732 generic.go:334] "Generic (PLEG): container finished" podID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerID="7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057" exitCode=0 Jan 31 09:26:10 crc kubenswrapper[4732]: I0131 09:26:10.957555 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656dg" event={"ID":"2e12f87a-681b-4268-a759-5a0043ce9b74","Type":"ContainerDied","Data":"7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057"} Jan 31 09:26:11 crc kubenswrapper[4732]: I0131 09:26:11.968464 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656dg" event={"ID":"2e12f87a-681b-4268-a759-5a0043ce9b74","Type":"ContainerStarted","Data":"d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183"} Jan 31 09:26:11 crc kubenswrapper[4732]: I0131 09:26:11.996987 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-656dg" podStartSLOduration=2.4980721790000002 podStartE2EDuration="4.996968407s" podCreationTimestamp="2026-01-31 09:26:07 +0000 UTC" firstStartedPulling="2026-01-31 09:26:08.939425417 +0000 UTC m=+1507.245301631" lastFinishedPulling="2026-01-31 09:26:11.438321665 +0000 UTC m=+1509.744197859" observedRunningTime="2026-01-31 09:26:11.989679841 +0000 UTC m=+1510.295556045" watchObservedRunningTime="2026-01-31 09:26:11.996968407 +0000 UTC m=+1510.302844611" Jan 31 09:26:17 crc kubenswrapper[4732]: I0131 09:26:17.497784 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon 
Jan 31 09:26:17 crc kubenswrapper[4732]: I0131 09:26:17.497784 4732 patch_prober.go:28] interesting pod/machine-config-daemon container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 09:26:17 crc kubenswrapper[4732]: I0131 09:26:17.498187 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 09:26:17 crc kubenswrapper[4732]: I0131 09:26:17.498255 4732 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8"
Jan 31 09:26:17 crc kubenswrapper[4732]: I0131 09:26:17.499106 4732 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77"} pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 09:26:17 crc kubenswrapper[4732]: I0131 09:26:17.499218 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" containerID="cri-o://3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" gracePeriod=600
Jan 31 09:26:18 crc kubenswrapper[4732]: I0131 09:26:18.136643 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:18 crc kubenswrapper[4732]: I0131 09:26:18.136716 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:18 crc kubenswrapper[4732]: I0131 09:26:18.204890 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-656dg"
Jan 31 09:26:18 crc kubenswrapper[4732]: E0131 09:26:18.867836 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820"
Jan 31 09:26:19 crc kubenswrapper[4732]: I0131 09:26:19.012630 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d790207-d357-4b47-87bf-5b505e061820" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" exitCode=0
Jan 31 09:26:19 crc kubenswrapper[4732]: I0131 09:26:19.012733 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerDied","Data":"3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77"}
Jan 31 09:26:19 crc kubenswrapper[4732]: I0131 09:26:19.012822 4732 scope.go:117] "RemoveContainer" containerID="777b6bb11b5556f90e1c2a08822928a50217112bcd9efce47de0d5e1a98e3392"
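Editor's note: from here on the log shows the machine-config-daemon container stuck behind a "back-off 5m0s" CrashLoopBackOff gate: each periodic sync (09:26:32, 09:26:47, 09:27:02, ... roughly every 10-15 s below) re-evaluates the pod and is rejected until the back-off expires. The kubelet's usual policy is a per-container delay that doubles on each failed restart up to a cap; a sketch under that assumption, with the base and the 5m cap taken as illustrative values rather than read from this cluster's config:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Kubelet-style crash back-off sketch: delay doubles per failed restart
    	// and saturates at the cap (the log quotes "back-off 5m0s").
    	base, maxDelay := 10*time.Second, 5*time.Minute
    	delay := base
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("restart %d: back off %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s for every later attempt.
    }
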
containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:26:19 crc kubenswrapper[4732]: E0131 09:26:19.014171 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:26:19 crc kubenswrapper[4732]: I0131 09:26:19.082430 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-656dg" Jan 31 09:26:19 crc kubenswrapper[4732]: I0131 09:26:19.122516 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-656dg"] Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.025676 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-656dg" podUID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerName="registry-server" containerID="cri-o://d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183" gracePeriod=2 Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.271513 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xt2x6"] Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.272538 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.293777 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xt2x6"] Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.294763 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-utilities\") pod \"community-operators-xt2x6\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.294855 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5d4\" (UniqueName: \"kubernetes.io/projected/9264cbfe-6b17-491e-8999-3a70e0198ca1-kube-api-access-zq5d4\") pod \"community-operators-xt2x6\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.294886 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-catalog-content\") pod \"community-operators-xt2x6\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.396202 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5d4\" (UniqueName: \"kubernetes.io/projected/9264cbfe-6b17-491e-8999-3a70e0198ca1-kube-api-access-zq5d4\") pod \"community-operators-xt2x6\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.396256 
4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-catalog-content\") pod \"community-operators-xt2x6\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.396510 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-utilities\") pod \"community-operators-xt2x6\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.396640 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-catalog-content\") pod \"community-operators-xt2x6\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.396984 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-utilities\") pod \"community-operators-xt2x6\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.405736 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-656dg" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.420312 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5d4\" (UniqueName: \"kubernetes.io/projected/9264cbfe-6b17-491e-8999-3a70e0198ca1-kube-api-access-zq5d4\") pod \"community-operators-xt2x6\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.598198 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-catalog-content\") pod \"2e12f87a-681b-4268-a759-5a0043ce9b74\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.598711 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-utilities\") pod \"2e12f87a-681b-4268-a759-5a0043ce9b74\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.598816 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll2cw\" (UniqueName: \"kubernetes.io/projected/2e12f87a-681b-4268-a759-5a0043ce9b74-kube-api-access-ll2cw\") pod \"2e12f87a-681b-4268-a759-5a0043ce9b74\" (UID: \"2e12f87a-681b-4268-a759-5a0043ce9b74\") " Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.599349 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-utilities" (OuterVolumeSpecName: "utilities") pod "2e12f87a-681b-4268-a759-5a0043ce9b74" (UID: "2e12f87a-681b-4268-a759-5a0043ce9b74"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.599956 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.601548 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e12f87a-681b-4268-a759-5a0043ce9b74-kube-api-access-ll2cw" (OuterVolumeSpecName: "kube-api-access-ll2cw") pod "2e12f87a-681b-4268-a759-5a0043ce9b74" (UID: "2e12f87a-681b-4268-a759-5a0043ce9b74"). InnerVolumeSpecName "kube-api-access-ll2cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.616782 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.702125 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll2cw\" (UniqueName: \"kubernetes.io/projected/2e12f87a-681b-4268-a759-5a0043ce9b74-kube-api-access-ll2cw\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.721141 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e12f87a-681b-4268-a759-5a0043ce9b74" (UID: "2e12f87a-681b-4268-a759-5a0043ce9b74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.806310 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e12f87a-681b-4268-a759-5a0043ce9b74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:21 crc kubenswrapper[4732]: I0131 09:26:21.859473 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xt2x6"] Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.035854 4732 generic.go:334] "Generic (PLEG): container finished" podID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerID="d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183" exitCode=0 Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.035917 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-656dg" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.035906 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656dg" event={"ID":"2e12f87a-681b-4268-a759-5a0043ce9b74","Type":"ContainerDied","Data":"d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183"} Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.036520 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656dg" event={"ID":"2e12f87a-681b-4268-a759-5a0043ce9b74","Type":"ContainerDied","Data":"29b157830fbef40d781d40bed5e3826609c906a763e5b72b90cee5d6d8cea24c"} Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.036546 4732 scope.go:117] "RemoveContainer" containerID="d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.041025 4732 generic.go:334] "Generic (PLEG): container finished" podID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerID="5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856" exitCode=0 Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.041063 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xt2x6" event={"ID":"9264cbfe-6b17-491e-8999-3a70e0198ca1","Type":"ContainerDied","Data":"5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856"} Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.041098 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xt2x6" event={"ID":"9264cbfe-6b17-491e-8999-3a70e0198ca1","Type":"ContainerStarted","Data":"3f6b4b05eaf571ca3a51b640a5f9c2f8c713991b9757e8306c7e848d7ac06420"} Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.101371 4732 scope.go:117] "RemoveContainer" containerID="7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.104719 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-656dg"] Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.108203 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-656dg"] Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.129886 4732 scope.go:117] "RemoveContainer" containerID="c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.161847 4732 scope.go:117] "RemoveContainer" containerID="d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183" Jan 31 09:26:22 crc kubenswrapper[4732]: E0131 09:26:22.162235 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183\": container with ID starting with d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183 not found: ID does not exist" containerID="d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.162269 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183"} err="failed to get container status \"d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183\": rpc error: code = NotFound desc = could not find container 
\"d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183\": container with ID starting with d895ab26fd561f11dab8cd4d8001e029f55ee41384747dfd0582712824be9183 not found: ID does not exist" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.162294 4732 scope.go:117] "RemoveContainer" containerID="7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057" Jan 31 09:26:22 crc kubenswrapper[4732]: E0131 09:26:22.162481 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057\": container with ID starting with 7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057 not found: ID does not exist" containerID="7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.162509 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057"} err="failed to get container status \"7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057\": rpc error: code = NotFound desc = could not find container \"7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057\": container with ID starting with 7bc43bc96f778289060427cb473897dee11bab3838f8edfd6f142f8bdc222057 not found: ID does not exist" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.162525 4732 scope.go:117] "RemoveContainer" containerID="c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e" Jan 31 09:26:22 crc kubenswrapper[4732]: E0131 09:26:22.162826 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e\": container with ID starting with c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e not found: ID does not exist" containerID="c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.162853 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e"} err="failed to get container status \"c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e\": rpc error: code = NotFound desc = could not find container \"c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e\": container with ID starting with c66b5574a75190dfa43ca1b2cdb9dfc8ae21408d31afedb9d3fc37979ae65b8e not found: ID does not exist" Jan 31 09:26:22 crc kubenswrapper[4732]: I0131 09:26:22.549823 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e12f87a-681b-4268-a759-5a0043ce9b74" path="/var/lib/kubelet/pods/2e12f87a-681b-4268-a759-5a0043ce9b74/volumes" Jan 31 09:26:23 crc kubenswrapper[4732]: I0131 09:26:23.051548 4732 generic.go:334] "Generic (PLEG): container finished" podID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerID="05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca" exitCode=0 Jan 31 09:26:23 crc kubenswrapper[4732]: I0131 09:26:23.051932 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xt2x6" event={"ID":"9264cbfe-6b17-491e-8999-3a70e0198ca1","Type":"ContainerDied","Data":"05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca"} Jan 31 09:26:24 crc kubenswrapper[4732]: I0131 
09:26:24.059677 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xt2x6" event={"ID":"9264cbfe-6b17-491e-8999-3a70e0198ca1","Type":"ContainerStarted","Data":"f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e"} Jan 31 09:26:24 crc kubenswrapper[4732]: I0131 09:26:24.078744 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xt2x6" podStartSLOduration=1.702036823 podStartE2EDuration="3.078722764s" podCreationTimestamp="2026-01-31 09:26:21 +0000 UTC" firstStartedPulling="2026-01-31 09:26:22.042654752 +0000 UTC m=+1520.348530956" lastFinishedPulling="2026-01-31 09:26:23.419340693 +0000 UTC m=+1521.725216897" observedRunningTime="2026-01-31 09:26:24.07600767 +0000 UTC m=+1522.381883874" watchObservedRunningTime="2026-01-31 09:26:24.078722764 +0000 UTC m=+1522.384598968" Jan 31 09:26:31 crc kubenswrapper[4732]: I0131 09:26:31.618048 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:31 crc kubenswrapper[4732]: I0131 09:26:31.619406 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:31 crc kubenswrapper[4732]: I0131 09:26:31.659975 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:32 crc kubenswrapper[4732]: I0131 09:26:32.134566 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:32 crc kubenswrapper[4732]: I0131 09:26:32.173830 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xt2x6"] Jan 31 09:26:32 crc kubenswrapper[4732]: I0131 09:26:32.545084 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:26:32 crc kubenswrapper[4732]: E0131 09:26:32.545313 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:26:34 crc kubenswrapper[4732]: I0131 09:26:34.112521 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xt2x6" podUID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerName="registry-server" containerID="cri-o://f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e" gracePeriod=2 Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.069992 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.120203 4732 generic.go:334] "Generic (PLEG): container finished" podID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerID="f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e" exitCode=0 Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.120247 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xt2x6" event={"ID":"9264cbfe-6b17-491e-8999-3a70e0198ca1","Type":"ContainerDied","Data":"f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e"} Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.120276 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xt2x6" event={"ID":"9264cbfe-6b17-491e-8999-3a70e0198ca1","Type":"ContainerDied","Data":"3f6b4b05eaf571ca3a51b640a5f9c2f8c713991b9757e8306c7e848d7ac06420"} Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.120297 4732 scope.go:117] "RemoveContainer" containerID="f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.120424 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xt2x6" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.142234 4732 scope.go:117] "RemoveContainer" containerID="05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.173941 4732 scope.go:117] "RemoveContainer" containerID="5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.190927 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-catalog-content\") pod \"9264cbfe-6b17-491e-8999-3a70e0198ca1\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.191035 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq5d4\" (UniqueName: \"kubernetes.io/projected/9264cbfe-6b17-491e-8999-3a70e0198ca1-kube-api-access-zq5d4\") pod \"9264cbfe-6b17-491e-8999-3a70e0198ca1\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.191064 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-utilities\") pod \"9264cbfe-6b17-491e-8999-3a70e0198ca1\" (UID: \"9264cbfe-6b17-491e-8999-3a70e0198ca1\") " Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.194202 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-utilities" (OuterVolumeSpecName: "utilities") pod "9264cbfe-6b17-491e-8999-3a70e0198ca1" (UID: "9264cbfe-6b17-491e-8999-3a70e0198ca1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.196227 4732 scope.go:117] "RemoveContainer" containerID="f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.197641 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9264cbfe-6b17-491e-8999-3a70e0198ca1-kube-api-access-zq5d4" (OuterVolumeSpecName: "kube-api-access-zq5d4") pod "9264cbfe-6b17-491e-8999-3a70e0198ca1" (UID: "9264cbfe-6b17-491e-8999-3a70e0198ca1"). InnerVolumeSpecName "kube-api-access-zq5d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:26:35 crc kubenswrapper[4732]: E0131 09:26:35.200903 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e\": container with ID starting with f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e not found: ID does not exist" containerID="f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.200949 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e"} err="failed to get container status \"f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e\": rpc error: code = NotFound desc = could not find container \"f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e\": container with ID starting with f97d2ae7ea12909284eb1b47f90cd95044a086c0e55c39c0f06f67810e91494e not found: ID does not exist" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.200973 4732 scope.go:117] "RemoveContainer" containerID="05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca" Jan 31 09:26:35 crc kubenswrapper[4732]: E0131 09:26:35.201282 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca\": container with ID starting with 05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca not found: ID does not exist" containerID="05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.201300 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca"} err="failed to get container status \"05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca\": rpc error: code = NotFound desc = could not find container \"05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca\": container with ID starting with 05fbbfd604512929fe0e74e29581f9d3ad75ec050effbbc8ae7081eb80ec46ca not found: ID does not exist" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.201312 4732 scope.go:117] "RemoveContainer" containerID="5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856" Jan 31 09:26:35 crc kubenswrapper[4732]: E0131 09:26:35.201515 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856\": container with ID starting with 5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856 not found: ID does not 
exist" containerID="5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.201531 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856"} err="failed to get container status \"5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856\": rpc error: code = NotFound desc = could not find container \"5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856\": container with ID starting with 5b7a52f67c8b8f70cc049c374dda98d143759e342ba44ccaaf714b2ba48aa856 not found: ID does not exist" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.237899 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9264cbfe-6b17-491e-8999-3a70e0198ca1" (UID: "9264cbfe-6b17-491e-8999-3a70e0198ca1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.291909 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq5d4\" (UniqueName: \"kubernetes.io/projected/9264cbfe-6b17-491e-8999-3a70e0198ca1-kube-api-access-zq5d4\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.291938 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.291947 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9264cbfe-6b17-491e-8999-3a70e0198ca1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.459717 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xt2x6"] Jan 31 09:26:35 crc kubenswrapper[4732]: I0131 09:26:35.464307 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xt2x6"] Jan 31 09:26:36 crc kubenswrapper[4732]: I0131 09:26:36.552605 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9264cbfe-6b17-491e-8999-3a70e0198ca1" path="/var/lib/kubelet/pods/9264cbfe-6b17-491e-8999-3a70e0198ca1/volumes" Jan 31 09:26:47 crc kubenswrapper[4732]: I0131 09:26:47.543837 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:26:47 crc kubenswrapper[4732]: E0131 09:26:47.544916 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:27:02 crc kubenswrapper[4732]: I0131 09:27:02.326555 4732 generic.go:334] "Generic (PLEG): container finished" podID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" containerID="3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59" exitCode=0 Jan 31 09:27:02 crc kubenswrapper[4732]: I0131 09:27:02.326631 4732 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-4v4x4/must-gather-8vtn6" event={"ID":"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f","Type":"ContainerDied","Data":"3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59"} Jan 31 09:27:02 crc kubenswrapper[4732]: I0131 09:27:02.327927 4732 scope.go:117] "RemoveContainer" containerID="3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59" Jan 31 09:27:02 crc kubenswrapper[4732]: I0131 09:27:02.511533 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4v4x4_must-gather-8vtn6_efcd56a7-a326-43ac-8d3e-c1a2fc2a464f/gather/0.log" Jan 31 09:27:02 crc kubenswrapper[4732]: I0131 09:27:02.553620 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:27:02 crc kubenswrapper[4732]: E0131 09:27:02.554415 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.478534 4732 scope.go:117] "RemoveContainer" containerID="5e80e6297ed4cb6407583fc9a3cefb0d406579b8e762ae1e494844cf4e75d919" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.512192 4732 scope.go:117] "RemoveContainer" containerID="6e7852a660c0fcf3225ca272b8e2af7fc11735fb24d2d14460cde6438a7beb2d" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.526405 4732 scope.go:117] "RemoveContainer" containerID="1afc72dc67226efea88d83cce600c96a720e1c283b758b093373dafe9a1d70f3" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.562296 4732 scope.go:117] "RemoveContainer" containerID="3a8ebcfcb039bb39617fcee0e053ba202cf3bccf0891372d90f2647abbe5c1ea" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.582889 4732 scope.go:117] "RemoveContainer" containerID="fd00da0aa47cdd0410c40f7add08c30b0950cfafc21a202b2619d006f368871b" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.599281 4732 scope.go:117] "RemoveContainer" containerID="655a856811f2b949bc2c94495ba00a853818330f4ec09b9aeffd1751288aff79" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.613755 4732 scope.go:117] "RemoveContainer" containerID="039961fef86b21e927d5ebdf1e2eb67c58775d812f1269411f3f0411c895a43f" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.627009 4732 scope.go:117] "RemoveContainer" containerID="8f37816afb654d1e776d0d4eec1d440d5af5c0af6bb7163bf1822d00c7129008" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.639521 4732 scope.go:117] "RemoveContainer" containerID="78f74f08eb824105edcd5f1f9501b3fa915219d39b763fb391f67d7c1c054292" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.668743 4732 scope.go:117] "RemoveContainer" containerID="186ff48385c2286a698108d59672234436dfc7dbb5bac5e777070affa544b217" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.686882 4732 scope.go:117] "RemoveContainer" containerID="f7bb7853880d63c47b634d75defb148b4ed41cf77de6d39f02020df2ff5b03d3" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.711513 4732 scope.go:117] "RemoveContainer" containerID="61d667d399369be98c22e8ccc9f3b4cb18a2a2b489b34f036b2fc88c0e71e429" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.731451 4732 scope.go:117] "RemoveContainer" 
containerID="29c732c7d142dfca9f679c8ac3af3f06cbb08a8c2b215575b2a3b5e1d907c9bb" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.757555 4732 scope.go:117] "RemoveContainer" containerID="ada6f8ad69317a784118495c96beb36d701ce7b36209a86b9a936d7cc110e3c2" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.775820 4732 scope.go:117] "RemoveContainer" containerID="8d252efa39cfce425e89f4670019c9177c3b0c93daee21298b31748b98d341f0" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.798529 4732 scope.go:117] "RemoveContainer" containerID="d31d074c83b389c8db51d7fd48db465e4e9e513ed6084f0c730eaa69444cb6c5" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.826441 4732 scope.go:117] "RemoveContainer" containerID="b50b39b84c6087e4699184efc06a407c325204a64a826bce844fec4c5164858e" Jan 31 09:27:05 crc kubenswrapper[4732]: I0131 09:27:05.849210 4732 scope.go:117] "RemoveContainer" containerID="420022f2e68e290b989b3165de84484bcc24a80cd43dfb94fa6e26433aff9a55" Jan 31 09:27:09 crc kubenswrapper[4732]: I0131 09:27:09.671371 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4v4x4/must-gather-8vtn6"] Jan 31 09:27:09 crc kubenswrapper[4732]: I0131 09:27:09.672616 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4v4x4/must-gather-8vtn6" podUID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" containerName="copy" containerID="cri-o://f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854" gracePeriod=2 Jan 31 09:27:09 crc kubenswrapper[4732]: I0131 09:27:09.675778 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4v4x4/must-gather-8vtn6"] Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.041915 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4v4x4_must-gather-8vtn6_efcd56a7-a326-43ac-8d3e-c1a2fc2a464f/copy/0.log" Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.042730 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4v4x4/must-gather-8vtn6" Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.208331 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-must-gather-output\") pod \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\" (UID: \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\") " Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.208502 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkkgd\" (UniqueName: \"kubernetes.io/projected/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-kube-api-access-bkkgd\") pod \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\" (UID: \"efcd56a7-a326-43ac-8d3e-c1a2fc2a464f\") " Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.223597 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-kube-api-access-bkkgd" (OuterVolumeSpecName: "kube-api-access-bkkgd") pod "efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" (UID: "efcd56a7-a326-43ac-8d3e-c1a2fc2a464f"). InnerVolumeSpecName "kube-api-access-bkkgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.290100 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" (UID: "efcd56a7-a326-43ac-8d3e-c1a2fc2a464f"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.309403 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkkgd\" (UniqueName: \"kubernetes.io/projected/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-kube-api-access-bkkgd\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.309757 4732 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.394278 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4v4x4_must-gather-8vtn6_efcd56a7-a326-43ac-8d3e-c1a2fc2a464f/copy/0.log" Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.395086 4732 generic.go:334] "Generic (PLEG): container finished" podID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" containerID="f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854" exitCode=143 Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.395142 4732 scope.go:117] "RemoveContainer" containerID="f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854" Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.395145 4732 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.395145 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4v4x4/must-gather-8vtn6"
Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.415901 4732 scope.go:117] "RemoveContainer" containerID="3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59"
Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.454795 4732 scope.go:117] "RemoveContainer" containerID="f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854"
Jan 31 09:27:10 crc kubenswrapper[4732]: E0131 09:27:10.455333 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854\": container with ID starting with f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854 not found: ID does not exist" containerID="f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854"
Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.455381 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854"} err="failed to get container status \"f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854\": rpc error: code = NotFound desc = could not find container \"f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854\": container with ID starting with f88858a19d913954a48096ea1df3d5c79e72769bf4ad9e1829233f60113d0854 not found: ID does not exist"
Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.455409 4732 scope.go:117] "RemoveContainer" containerID="3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59"
Jan 31 09:27:10 crc kubenswrapper[4732]: E0131 09:27:10.455684 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59\": container with ID starting with 3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59 not found: ID does not exist" containerID="3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59"
Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.455706 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59"} err="failed to get container status \"3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59\": rpc error: code = NotFound desc = could not find container \"3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59\": container with ID starting with 3c1329e6bf2dfb982396045a78d5b23bda418913a472d6b57856329382998b59 not found: ID does not exist"
Jan 31 09:27:10 crc kubenswrapper[4732]: I0131 09:27:10.555636 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" path="/var/lib/kubelet/pods/efcd56a7-a326-43ac-8d3e-c1a2fc2a464f/volumes"
Jan 31 09:27:17 crc kubenswrapper[4732]: I0131 09:27:17.543125 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77"
Jan 31 09:27:17 crc kubenswrapper[4732]: E0131 09:27:17.543726 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\""
pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:27:31 crc kubenswrapper[4732]: I0131 09:27:31.542580 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:27:31 crc kubenswrapper[4732]: E0131 09:27:31.543384 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:27:44 crc kubenswrapper[4732]: I0131 09:27:44.543825 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:27:44 crc kubenswrapper[4732]: E0131 09:27:44.545139 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:27:55 crc kubenswrapper[4732]: I0131 09:27:55.543217 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:27:55 crc kubenswrapper[4732]: E0131 09:27:55.543985 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:28:05 crc kubenswrapper[4732]: I0131 09:28:05.942727 4732 scope.go:117] "RemoveContainer" containerID="85b861719f4e2c096ba302360733974fc49e5be1c8a6dc54dda1f149625db608" Jan 31 09:28:09 crc kubenswrapper[4732]: I0131 09:28:09.542733 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:28:09 crc kubenswrapper[4732]: E0131 09:28:09.543445 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:28:20 crc kubenswrapper[4732]: I0131 09:28:20.543007 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:28:20 crc kubenswrapper[4732]: E0131 09:28:20.544132 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:28:31 crc kubenswrapper[4732]: I0131 09:28:31.543225 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:28:31 crc kubenswrapper[4732]: E0131 09:28:31.544324 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:28:43 crc kubenswrapper[4732]: I0131 09:28:43.542409 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:28:43 crc kubenswrapper[4732]: E0131 09:28:43.543449 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:28:55 crc kubenswrapper[4732]: I0131 09:28:55.542260 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:28:55 crc kubenswrapper[4732]: E0131 09:28:55.542969 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:29:06 crc kubenswrapper[4732]: I0131 09:29:05.999791 4732 scope.go:117] "RemoveContainer" containerID="e9a704782d501296be317dbceec99e3b7ae21d704b38e927a87174fe7c4bd3f1" Jan 31 09:29:08 crc kubenswrapper[4732]: I0131 09:29:08.543535 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:29:08 crc kubenswrapper[4732]: E0131 09:29:08.544000 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:29:23 crc kubenswrapper[4732]: I0131 09:29:23.542305 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:29:23 crc kubenswrapper[4732]: E0131 09:29:23.543328 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:29:35 crc kubenswrapper[4732]: I0131 09:29:35.543384 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:29:35 crc kubenswrapper[4732]: E0131 09:29:35.544418 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.517557 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qcgvm/must-gather-dmxpx"] Jan 31 09:29:46 crc kubenswrapper[4732]: E0131 09:29:46.518493 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerName="extract-utilities" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518507 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerName="extract-utilities" Jan 31 09:29:46 crc kubenswrapper[4732]: E0131 09:29:46.518521 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerName="extract-content" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518530 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerName="extract-content" Jan 31 09:29:46 crc kubenswrapper[4732]: E0131 09:29:46.518542 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" containerName="gather" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518550 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" containerName="gather" Jan 31 09:29:46 crc kubenswrapper[4732]: E0131 09:29:46.518561 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerName="registry-server" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518568 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerName="registry-server" Jan 31 09:29:46 crc kubenswrapper[4732]: E0131 09:29:46.518585 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerName="extract-utilities" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518592 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerName="extract-utilities" Jan 31 09:29:46 crc kubenswrapper[4732]: E0131 09:29:46.518612 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerName="extract-content" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518620 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerName="extract-content" Jan 31 09:29:46 crc kubenswrapper[4732]: E0131 09:29:46.518635 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerName="registry-server" Jan 31 09:29:46 crc 
kubenswrapper[4732]: I0131 09:29:46.518643 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerName="registry-server" Jan 31 09:29:46 crc kubenswrapper[4732]: E0131 09:29:46.518654 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" containerName="copy" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518686 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" containerName="copy" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518805 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e12f87a-681b-4268-a759-5a0043ce9b74" containerName="registry-server" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518822 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" containerName="gather" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518840 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="efcd56a7-a326-43ac-8d3e-c1a2fc2a464f" containerName="copy" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.518851 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="9264cbfe-6b17-491e-8999-3a70e0198ca1" containerName="registry-server" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.519603 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.523196 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qcgvm"/"kube-root-ca.crt" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.523428 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qcgvm"/"openshift-service-ca.crt" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.523575 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qcgvm"/"default-dockercfg-28lch" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.543386 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qcgvm/must-gather-dmxpx"] Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.614287 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b48d1b94-13fa-4c78-8391-6c54f00c4049-must-gather-output\") pod \"must-gather-dmxpx\" (UID: \"b48d1b94-13fa-4c78-8391-6c54f00c4049\") " pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.614347 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnsls\" (UniqueName: \"kubernetes.io/projected/b48d1b94-13fa-4c78-8391-6c54f00c4049-kube-api-access-qnsls\") pod \"must-gather-dmxpx\" (UID: \"b48d1b94-13fa-4c78-8391-6c54f00c4049\") " pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.715052 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnsls\" (UniqueName: \"kubernetes.io/projected/b48d1b94-13fa-4c78-8391-6c54f00c4049-kube-api-access-qnsls\") pod \"must-gather-dmxpx\" (UID: \"b48d1b94-13fa-4c78-8391-6c54f00c4049\") " pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.715164 
4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b48d1b94-13fa-4c78-8391-6c54f00c4049-must-gather-output\") pod \"must-gather-dmxpx\" (UID: \"b48d1b94-13fa-4c78-8391-6c54f00c4049\") " pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.715634 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b48d1b94-13fa-4c78-8391-6c54f00c4049-must-gather-output\") pod \"must-gather-dmxpx\" (UID: \"b48d1b94-13fa-4c78-8391-6c54f00c4049\") " pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.737300 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnsls\" (UniqueName: \"kubernetes.io/projected/b48d1b94-13fa-4c78-8391-6c54f00c4049-kube-api-access-qnsls\") pod \"must-gather-dmxpx\" (UID: \"b48d1b94-13fa-4c78-8391-6c54f00c4049\") " pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:29:46 crc kubenswrapper[4732]: I0131 09:29:46.844830 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:29:47 crc kubenswrapper[4732]: I0131 09:29:47.057242 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qcgvm/must-gather-dmxpx"] Jan 31 09:29:47 crc kubenswrapper[4732]: I0131 09:29:47.413549 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" event={"ID":"b48d1b94-13fa-4c78-8391-6c54f00c4049","Type":"ContainerStarted","Data":"8bc8af864159428007839c4433582dfc653e8b215dbb29387accda4214427352"} Jan 31 09:29:48 crc kubenswrapper[4732]: I0131 09:29:48.421144 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" event={"ID":"b48d1b94-13fa-4c78-8391-6c54f00c4049","Type":"ContainerStarted","Data":"4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce"} Jan 31 09:29:48 crc kubenswrapper[4732]: I0131 09:29:48.421465 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" event={"ID":"b48d1b94-13fa-4c78-8391-6c54f00c4049","Type":"ContainerStarted","Data":"b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f"} Jan 31 09:29:48 crc kubenswrapper[4732]: I0131 09:29:48.442978 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" podStartSLOduration=2.442960176 podStartE2EDuration="2.442960176s" podCreationTimestamp="2026-01-31 09:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 09:29:48.43951127 +0000 UTC m=+1726.745387474" watchObservedRunningTime="2026-01-31 09:29:48.442960176 +0000 UTC m=+1726.748836380" Jan 31 09:29:50 crc kubenswrapper[4732]: I0131 09:29:50.545939 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:29:50 crc kubenswrapper[4732]: E0131 09:29:50.546107 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.134735 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z"] Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.136072 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.138119 4732 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.138387 4732 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.141040 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z"] Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.204457 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/892132d3-fb6d-4b47-b5ce-bfc23f479073-config-volume\") pod \"collect-profiles-29497530-s5s7z\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.204535 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr78j\" (UniqueName: \"kubernetes.io/projected/892132d3-fb6d-4b47-b5ce-bfc23f479073-kube-api-access-sr78j\") pod \"collect-profiles-29497530-s5s7z\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.204652 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/892132d3-fb6d-4b47-b5ce-bfc23f479073-secret-volume\") pod \"collect-profiles-29497530-s5s7z\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.305887 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/892132d3-fb6d-4b47-b5ce-bfc23f479073-config-volume\") pod \"collect-profiles-29497530-s5s7z\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.305962 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr78j\" (UniqueName: \"kubernetes.io/projected/892132d3-fb6d-4b47-b5ce-bfc23f479073-kube-api-access-sr78j\") pod \"collect-profiles-29497530-s5s7z\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.306011 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/892132d3-fb6d-4b47-b5ce-bfc23f479073-secret-volume\") pod \"collect-profiles-29497530-s5s7z\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.306805 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/892132d3-fb6d-4b47-b5ce-bfc23f479073-config-volume\") pod \"collect-profiles-29497530-s5s7z\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.313229 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/892132d3-fb6d-4b47-b5ce-bfc23f479073-secret-volume\") pod \"collect-profiles-29497530-s5s7z\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.325258 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr78j\" (UniqueName: \"kubernetes.io/projected/892132d3-fb6d-4b47-b5ce-bfc23f479073-kube-api-access-sr78j\") pod \"collect-profiles-29497530-s5s7z\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.454546 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:00 crc kubenswrapper[4732]: I0131 09:30:00.634321 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z"] Jan 31 09:30:00 crc kubenswrapper[4732]: W0131 09:30:00.638888 4732 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod892132d3_fb6d_4b47_b5ce_bfc23f479073.slice/crio-361f692802a3d42aa6b410360102991144ad2a93a90ae30a7682dcca1952457c WatchSource:0}: Error finding container 361f692802a3d42aa6b410360102991144ad2a93a90ae30a7682dcca1952457c: Status 404 returned error can't find the container with id 361f692802a3d42aa6b410360102991144ad2a93a90ae30a7682dcca1952457c Jan 31 09:30:01 crc kubenswrapper[4732]: I0131 09:30:01.504979 4732 generic.go:334] "Generic (PLEG): container finished" podID="892132d3-fb6d-4b47-b5ce-bfc23f479073" containerID="69e5b29dc7561c0d73b6331e7d93ff19db8795725f1bad225b50b3477a51841a" exitCode=0 Jan 31 09:30:01 crc kubenswrapper[4732]: I0131 09:30:01.505054 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" event={"ID":"892132d3-fb6d-4b47-b5ce-bfc23f479073","Type":"ContainerDied","Data":"69e5b29dc7561c0d73b6331e7d93ff19db8795725f1bad225b50b3477a51841a"} Jan 31 09:30:01 crc kubenswrapper[4732]: I0131 09:30:01.505328 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" event={"ID":"892132d3-fb6d-4b47-b5ce-bfc23f479073","Type":"ContainerStarted","Data":"361f692802a3d42aa6b410360102991144ad2a93a90ae30a7682dcca1952457c"} Jan 31 09:30:02 crc kubenswrapper[4732]: I0131 09:30:02.760157 4732 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:02 crc kubenswrapper[4732]: I0131 09:30:02.934800 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr78j\" (UniqueName: \"kubernetes.io/projected/892132d3-fb6d-4b47-b5ce-bfc23f479073-kube-api-access-sr78j\") pod \"892132d3-fb6d-4b47-b5ce-bfc23f479073\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " Jan 31 09:30:02 crc kubenswrapper[4732]: I0131 09:30:02.935123 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/892132d3-fb6d-4b47-b5ce-bfc23f479073-config-volume\") pod \"892132d3-fb6d-4b47-b5ce-bfc23f479073\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " Jan 31 09:30:02 crc kubenswrapper[4732]: I0131 09:30:02.935165 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/892132d3-fb6d-4b47-b5ce-bfc23f479073-secret-volume\") pod \"892132d3-fb6d-4b47-b5ce-bfc23f479073\" (UID: \"892132d3-fb6d-4b47-b5ce-bfc23f479073\") " Jan 31 09:30:02 crc kubenswrapper[4732]: I0131 09:30:02.935761 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/892132d3-fb6d-4b47-b5ce-bfc23f479073-config-volume" (OuterVolumeSpecName: "config-volume") pod "892132d3-fb6d-4b47-b5ce-bfc23f479073" (UID: "892132d3-fb6d-4b47-b5ce-bfc23f479073"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 09:30:02 crc kubenswrapper[4732]: I0131 09:30:02.939685 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892132d3-fb6d-4b47-b5ce-bfc23f479073-kube-api-access-sr78j" (OuterVolumeSpecName: "kube-api-access-sr78j") pod "892132d3-fb6d-4b47-b5ce-bfc23f479073" (UID: "892132d3-fb6d-4b47-b5ce-bfc23f479073"). InnerVolumeSpecName "kube-api-access-sr78j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:30:02 crc kubenswrapper[4732]: I0131 09:30:02.939919 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892132d3-fb6d-4b47-b5ce-bfc23f479073-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "892132d3-fb6d-4b47-b5ce-bfc23f479073" (UID: "892132d3-fb6d-4b47-b5ce-bfc23f479073"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 09:30:03 crc kubenswrapper[4732]: I0131 09:30:03.036809 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr78j\" (UniqueName: \"kubernetes.io/projected/892132d3-fb6d-4b47-b5ce-bfc23f479073-kube-api-access-sr78j\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:03 crc kubenswrapper[4732]: I0131 09:30:03.036847 4732 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/892132d3-fb6d-4b47-b5ce-bfc23f479073-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:03 crc kubenswrapper[4732]: I0131 09:30:03.036858 4732 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/892132d3-fb6d-4b47-b5ce-bfc23f479073-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 09:30:03 crc kubenswrapper[4732]: I0131 09:30:03.518621 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" event={"ID":"892132d3-fb6d-4b47-b5ce-bfc23f479073","Type":"ContainerDied","Data":"361f692802a3d42aa6b410360102991144ad2a93a90ae30a7682dcca1952457c"} Jan 31 09:30:03 crc kubenswrapper[4732]: I0131 09:30:03.518698 4732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="361f692802a3d42aa6b410360102991144ad2a93a90ae30a7682dcca1952457c" Jan 31 09:30:03 crc kubenswrapper[4732]: I0131 09:30:03.518766 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497530-s5s7z" Jan 31 09:30:04 crc kubenswrapper[4732]: I0131 09:30:04.543024 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:30:04 crc kubenswrapper[4732]: E0131 09:30:04.543235 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:30:06 crc kubenswrapper[4732]: I0131 09:30:06.041700 4732 scope.go:117] "RemoveContainer" containerID="def72532b530c78803f0cccd8c5a2d65a616e430cf1d92043ad3623133222585" Jan 31 09:30:15 crc kubenswrapper[4732]: I0131 09:30:15.543489 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:30:15 crc kubenswrapper[4732]: E0131 09:30:15.544164 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:30:26 crc kubenswrapper[4732]: I0131 09:30:26.543364 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:30:26 crc kubenswrapper[4732]: E0131 09:30:26.544073 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:30:35 crc kubenswrapper[4732]: I0131 09:30:35.373361 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-v69mc_498d64fc-0d0f-43c6-aaae-bd3c5f0d7873/control-plane-machine-set-operator/0.log" Jan 31 09:30:35 crc kubenswrapper[4732]: I0131 09:30:35.557518 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-54nxd_e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a/kube-rbac-proxy/0.log" Jan 31 09:30:35 crc kubenswrapper[4732]: I0131 09:30:35.593979 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-54nxd_e9ce5ab1-4da9-47a4-83b9-3c7d8af6d55a/machine-api-operator/0.log" Jan 31 09:30:41 crc kubenswrapper[4732]: I0131 09:30:41.543118 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:30:41 crc kubenswrapper[4732]: E0131 09:30:41.543953 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:30:54 crc kubenswrapper[4732]: I0131 09:30:54.543330 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:30:54 crc kubenswrapper[4732]: E0131 09:30:54.544020 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.077582 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jq8g8_53b5272f-ac5c-4616-a427-28fc830d7392/kube-rbac-proxy/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.102237 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jq8g8_53b5272f-ac5c-4616-a427-28fc830d7392/controller/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.258619 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-frr-files/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.451800 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-frr-files/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.452018 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-metrics/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 
09:31:03.473312 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-reloader/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.474564 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-reloader/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.653207 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-frr-files/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.674533 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-metrics/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.677537 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-metrics/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.721297 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-reloader/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.910785 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-reloader/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.910881 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-frr-files/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.910986 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/cp-metrics/0.log" Jan 31 09:31:03 crc kubenswrapper[4732]: I0131 09:31:03.920194 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/controller/0.log" Jan 31 09:31:04 crc kubenswrapper[4732]: I0131 09:31:04.088535 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/frr-metrics/0.log" Jan 31 09:31:04 crc kubenswrapper[4732]: I0131 09:31:04.104261 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/kube-rbac-proxy/0.log" Jan 31 09:31:04 crc kubenswrapper[4732]: I0131 09:31:04.114895 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/kube-rbac-proxy-frr/0.log" Jan 31 09:31:04 crc kubenswrapper[4732]: I0131 09:31:04.297096 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/reloader/0.log" Jan 31 09:31:04 crc kubenswrapper[4732]: I0131 09:31:04.322112 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5l2kt_4b09b4ac-95c1-4c31-99a0-12b38c3412ae/frr-k8s-webhook-server/0.log" Jan 31 09:31:04 crc kubenswrapper[4732]: I0131 09:31:04.569335 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6xvqw_66e27417-1fb4-4ca9-b104-d3d335370f0d/frr/0.log" Jan 31 09:31:04 crc kubenswrapper[4732]: I0131 09:31:04.642764 4732 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d8dc66c8b-8p2mn_8ca218dd-0d42-45c8-b4e4-ca638781c915/manager/0.log" Jan 31 09:31:04 crc kubenswrapper[4732]: I0131 09:31:04.773200 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c98699f55-6bzdf_62f950f6-2a18-4ca6-8cdb-75f47437053a/webhook-server/0.log" Jan 31 09:31:04 crc kubenswrapper[4732]: I0131 09:31:04.839835 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gcmq2_3fbb0c82-6b72-4313-94e2-3e71d27cf75f/kube-rbac-proxy/0.log" Jan 31 09:31:05 crc kubenswrapper[4732]: I0131 09:31:05.040621 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gcmq2_3fbb0c82-6b72-4313-94e2-3e71d27cf75f/speaker/0.log" Jan 31 09:31:07 crc kubenswrapper[4732]: I0131 09:31:07.543258 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:31:07 crc kubenswrapper[4732]: E0131 09:31:07.544309 4732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnbt8_openshift-machine-config-operator(7d790207-d357-4b47-87bf-5b505e061820)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" Jan 31 09:31:20 crc kubenswrapper[4732]: I0131 09:31:20.542110 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:31:20 crc kubenswrapper[4732]: I0131 09:31:20.927953 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"9ab32081e481444933bd2595cf8dbdf7ca3690cb1b45450fb70f36da385fe1c0"} Jan 31 09:31:28 crc kubenswrapper[4732]: I0131 09:31:28.694008 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/util/0.log" Jan 31 09:31:28 crc kubenswrapper[4732]: I0131 09:31:28.921435 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/util/0.log" Jan 31 09:31:28 crc kubenswrapper[4732]: I0131 09:31:28.921877 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/pull/0.log" Jan 31 09:31:28 crc kubenswrapper[4732]: I0131 09:31:28.921896 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/pull/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.105715 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/util/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.125321 4732 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/extract/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.152873 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrl29v_76f99e73-f72c-4026-b43f-dcb9f20b554f/pull/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.261987 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-utilities/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.425209 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-utilities/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.444970 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-content/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.445999 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-content/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.590784 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-utilities/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.615955 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/extract-content/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.791266 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-utilities/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.950463 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-utilities/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.952572 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2h57x_9039963e-96e4-4b4d-abdd-79f0429da944/registry-server/0.log" Jan 31 09:31:29 crc kubenswrapper[4732]: I0131 09:31:29.988737 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-content/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.013928 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-content/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.177529 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-content/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.218054 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/extract-utilities/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.517285 4732 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8bchs_d0dbfc52-f4e9-462a-a253-2bb950c04e7b/marketplace-operator/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.530002 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-utilities/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.563175 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d7jxg_a7533049-a0d8-4488-bed6-2a9b28212061/registry-server/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.683841 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-utilities/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.700648 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-content/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.709120 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-content/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.843947 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-utilities/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.938312 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/extract-content/0.log" Jan 31 09:31:30 crc kubenswrapper[4732]: I0131 09:31:30.943523 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-krjtb_17e07aee-c4b1-4011-8442-c6dcfc4f415c/registry-server/0.log" Jan 31 09:31:31 crc kubenswrapper[4732]: I0131 09:31:31.031132 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-utilities/0.log" Jan 31 09:31:31 crc kubenswrapper[4732]: I0131 09:31:31.220529 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-utilities/0.log" Jan 31 09:31:31 crc kubenswrapper[4732]: I0131 09:31:31.221865 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-content/0.log" Jan 31 09:31:31 crc kubenswrapper[4732]: I0131 09:31:31.249577 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-content/0.log" Jan 31 09:31:31 crc kubenswrapper[4732]: I0131 09:31:31.435349 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-content/0.log" Jan 31 09:31:31 crc kubenswrapper[4732]: I0131 09:31:31.493528 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/extract-utilities/0.log" Jan 31 09:31:31 crc kubenswrapper[4732]: I0131 09:31:31.785861 4732 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4pkzq_a39f958d-6d9b-4a4a-9ec9-cfb1f96b6f45/registry-server/0.log" Jan 31 09:32:41 crc kubenswrapper[4732]: I0131 09:32:41.432898 4732 generic.go:334] "Generic (PLEG): container finished" podID="b48d1b94-13fa-4c78-8391-6c54f00c4049" containerID="b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f" exitCode=0 Jan 31 09:32:41 crc kubenswrapper[4732]: I0131 09:32:41.432995 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" event={"ID":"b48d1b94-13fa-4c78-8391-6c54f00c4049","Type":"ContainerDied","Data":"b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f"} Jan 31 09:32:41 crc kubenswrapper[4732]: I0131 09:32:41.434272 4732 scope.go:117] "RemoveContainer" containerID="b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f" Jan 31 09:32:42 crc kubenswrapper[4732]: I0131 09:32:42.394006 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcgvm_must-gather-dmxpx_b48d1b94-13fa-4c78-8391-6c54f00c4049/gather/0.log" Jan 31 09:32:51 crc kubenswrapper[4732]: I0131 09:32:51.988593 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qcgvm/must-gather-dmxpx"] Jan 31 09:32:51 crc kubenswrapper[4732]: I0131 09:32:51.989975 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" podUID="b48d1b94-13fa-4c78-8391-6c54f00c4049" containerName="copy" containerID="cri-o://4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce" gracePeriod=2 Jan 31 09:32:51 crc kubenswrapper[4732]: I0131 09:32:51.992893 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qcgvm/must-gather-dmxpx"] Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.317389 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcgvm_must-gather-dmxpx_b48d1b94-13fa-4c78-8391-6c54f00c4049/copy/0.log" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.318095 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.424462 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnsls\" (UniqueName: \"kubernetes.io/projected/b48d1b94-13fa-4c78-8391-6c54f00c4049-kube-api-access-qnsls\") pod \"b48d1b94-13fa-4c78-8391-6c54f00c4049\" (UID: \"b48d1b94-13fa-4c78-8391-6c54f00c4049\") " Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.424554 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b48d1b94-13fa-4c78-8391-6c54f00c4049-must-gather-output\") pod \"b48d1b94-13fa-4c78-8391-6c54f00c4049\" (UID: \"b48d1b94-13fa-4c78-8391-6c54f00c4049\") " Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.430808 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48d1b94-13fa-4c78-8391-6c54f00c4049-kube-api-access-qnsls" (OuterVolumeSpecName: "kube-api-access-qnsls") pod "b48d1b94-13fa-4c78-8391-6c54f00c4049" (UID: "b48d1b94-13fa-4c78-8391-6c54f00c4049"). InnerVolumeSpecName "kube-api-access-qnsls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.491763 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b48d1b94-13fa-4c78-8391-6c54f00c4049-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b48d1b94-13fa-4c78-8391-6c54f00c4049" (UID: "b48d1b94-13fa-4c78-8391-6c54f00c4049"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.505182 4732 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qcgvm_must-gather-dmxpx_b48d1b94-13fa-4c78-8391-6c54f00c4049/copy/0.log" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.505757 4732 generic.go:334] "Generic (PLEG): container finished" podID="b48d1b94-13fa-4c78-8391-6c54f00c4049" containerID="4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce" exitCode=143 Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.505806 4732 scope.go:117] "RemoveContainer" containerID="4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.505904 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qcgvm/must-gather-dmxpx" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.524683 4732 scope.go:117] "RemoveContainer" containerID="b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.525967 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnsls\" (UniqueName: \"kubernetes.io/projected/b48d1b94-13fa-4c78-8391-6c54f00c4049-kube-api-access-qnsls\") on node \"crc\" DevicePath \"\"" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.526014 4732 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b48d1b94-13fa-4c78-8391-6c54f00c4049-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.554503 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48d1b94-13fa-4c78-8391-6c54f00c4049" path="/var/lib/kubelet/pods/b48d1b94-13fa-4c78-8391-6c54f00c4049/volumes" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.582684 4732 scope.go:117] "RemoveContainer" containerID="4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce" Jan 31 09:32:52 crc kubenswrapper[4732]: E0131 09:32:52.584018 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce\": container with ID starting with 4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce not found: ID does not exist" containerID="4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.584067 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce"} err="failed to get container status \"4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce\": rpc error: code = NotFound desc = could not find container \"4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce\": container with ID starting with 
4a95f2552040e23cb82237e42fc7d1dd8d4b744cffad543d8632a98ef3b52dce not found: ID does not exist" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.584101 4732 scope.go:117] "RemoveContainer" containerID="b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f" Jan 31 09:32:52 crc kubenswrapper[4732]: E0131 09:32:52.584430 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f\": container with ID starting with b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f not found: ID does not exist" containerID="b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f" Jan 31 09:32:52 crc kubenswrapper[4732]: I0131 09:32:52.584472 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f"} err="failed to get container status \"b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f\": rpc error: code = NotFound desc = could not find container \"b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f\": container with ID starting with b6141ba3580c93eb3c741c0bc8fb44dce9869080708fa256445df445377a390f not found: ID does not exist" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.802216 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wlbwz"] Jan 31 09:32:54 crc kubenswrapper[4732]: E0131 09:32:54.802651 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48d1b94-13fa-4c78-8391-6c54f00c4049" containerName="copy" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.802690 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48d1b94-13fa-4c78-8391-6c54f00c4049" containerName="copy" Jan 31 09:32:54 crc kubenswrapper[4732]: E0131 09:32:54.802728 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892132d3-fb6d-4b47-b5ce-bfc23f479073" containerName="collect-profiles" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.802749 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="892132d3-fb6d-4b47-b5ce-bfc23f479073" containerName="collect-profiles" Jan 31 09:32:54 crc kubenswrapper[4732]: E0131 09:32:54.802768 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48d1b94-13fa-4c78-8391-6c54f00c4049" containerName="gather" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.802782 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48d1b94-13fa-4c78-8391-6c54f00c4049" containerName="gather" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.802954 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48d1b94-13fa-4c78-8391-6c54f00c4049" containerName="copy" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.802968 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48d1b94-13fa-4c78-8391-6c54f00c4049" containerName="gather" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.802983 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="892132d3-fb6d-4b47-b5ce-bfc23f479073" containerName="collect-profiles" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.805429 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.823449 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wlbwz"] Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.966153 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-utilities\") pod \"certified-operators-wlbwz\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.966509 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-catalog-content\") pod \"certified-operators-wlbwz\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:54 crc kubenswrapper[4732]: I0131 09:32:54.966530 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j228q\" (UniqueName: \"kubernetes.io/projected/d0101775-0b8f-47b2-acaf-c422b1a1188f-kube-api-access-j228q\") pod \"certified-operators-wlbwz\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:55 crc kubenswrapper[4732]: I0131 09:32:55.067740 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-utilities\") pod \"certified-operators-wlbwz\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:55 crc kubenswrapper[4732]: I0131 09:32:55.067883 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-catalog-content\") pod \"certified-operators-wlbwz\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:55 crc kubenswrapper[4732]: I0131 09:32:55.067915 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j228q\" (UniqueName: \"kubernetes.io/projected/d0101775-0b8f-47b2-acaf-c422b1a1188f-kube-api-access-j228q\") pod \"certified-operators-wlbwz\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:55 crc kubenswrapper[4732]: I0131 09:32:55.068367 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-utilities\") pod \"certified-operators-wlbwz\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:55 crc kubenswrapper[4732]: I0131 09:32:55.068367 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-catalog-content\") pod \"certified-operators-wlbwz\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:55 crc kubenswrapper[4732]: I0131 09:32:55.092613 4732 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j228q\" (UniqueName: \"kubernetes.io/projected/d0101775-0b8f-47b2-acaf-c422b1a1188f-kube-api-access-j228q\") pod \"certified-operators-wlbwz\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:55 crc kubenswrapper[4732]: I0131 09:32:55.125510 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:32:55 crc kubenswrapper[4732]: I0131 09:32:55.382336 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wlbwz"] Jan 31 09:32:55 crc kubenswrapper[4732]: I0131 09:32:55.530161 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbwz" event={"ID":"d0101775-0b8f-47b2-acaf-c422b1a1188f","Type":"ContainerStarted","Data":"bc5ae176ccd3492572307abad461c21211d142fcaf11491ffb2592e0ae5c8294"} Jan 31 09:32:56 crc kubenswrapper[4732]: I0131 09:32:56.536426 4732 generic.go:334] "Generic (PLEG): container finished" podID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerID="3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c" exitCode=0 Jan 31 09:32:56 crc kubenswrapper[4732]: I0131 09:32:56.536504 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbwz" event={"ID":"d0101775-0b8f-47b2-acaf-c422b1a1188f","Type":"ContainerDied","Data":"3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c"} Jan 31 09:32:56 crc kubenswrapper[4732]: I0131 09:32:56.538189 4732 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 09:32:57 crc kubenswrapper[4732]: I0131 09:32:57.547016 4732 generic.go:334] "Generic (PLEG): container finished" podID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerID="813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800" exitCode=0 Jan 31 09:32:57 crc kubenswrapper[4732]: I0131 09:32:57.547081 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbwz" event={"ID":"d0101775-0b8f-47b2-acaf-c422b1a1188f","Type":"ContainerDied","Data":"813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800"} Jan 31 09:32:58 crc kubenswrapper[4732]: I0131 09:32:58.557154 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbwz" event={"ID":"d0101775-0b8f-47b2-acaf-c422b1a1188f","Type":"ContainerStarted","Data":"376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c"} Jan 31 09:32:58 crc kubenswrapper[4732]: I0131 09:32:58.580312 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wlbwz" podStartSLOduration=3.156916318 podStartE2EDuration="4.5802903s" podCreationTimestamp="2026-01-31 09:32:54 +0000 UTC" firstStartedPulling="2026-01-31 09:32:56.53798164 +0000 UTC m=+1914.843857854" lastFinishedPulling="2026-01-31 09:32:57.961355622 +0000 UTC m=+1916.267231836" observedRunningTime="2026-01-31 09:32:58.577372161 +0000 UTC m=+1916.883248385" watchObservedRunningTime="2026-01-31 09:32:58.5802903 +0000 UTC m=+1916.886166515" Jan 31 09:33:05 crc kubenswrapper[4732]: I0131 09:33:05.126173 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:33:05 crc kubenswrapper[4732]: I0131 09:33:05.127337 4732 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:33:05 crc kubenswrapper[4732]: I0131 09:33:05.185858 4732 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:33:05 crc kubenswrapper[4732]: I0131 09:33:05.651786 4732 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:33:05 crc kubenswrapper[4732]: I0131 09:33:05.695959 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wlbwz"] Jan 31 09:33:07 crc kubenswrapper[4732]: I0131 09:33:07.624707 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wlbwz" podUID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerName="registry-server" containerID="cri-o://376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c" gracePeriod=2 Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.040174 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.178134 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-utilities\") pod \"d0101775-0b8f-47b2-acaf-c422b1a1188f\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.178750 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-catalog-content\") pod \"d0101775-0b8f-47b2-acaf-c422b1a1188f\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.178816 4732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j228q\" (UniqueName: \"kubernetes.io/projected/d0101775-0b8f-47b2-acaf-c422b1a1188f-kube-api-access-j228q\") pod \"d0101775-0b8f-47b2-acaf-c422b1a1188f\" (UID: \"d0101775-0b8f-47b2-acaf-c422b1a1188f\") " Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.179241 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-utilities" (OuterVolumeSpecName: "utilities") pod "d0101775-0b8f-47b2-acaf-c422b1a1188f" (UID: "d0101775-0b8f-47b2-acaf-c422b1a1188f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.192980 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0101775-0b8f-47b2-acaf-c422b1a1188f-kube-api-access-j228q" (OuterVolumeSpecName: "kube-api-access-j228q") pod "d0101775-0b8f-47b2-acaf-c422b1a1188f" (UID: "d0101775-0b8f-47b2-acaf-c422b1a1188f"). InnerVolumeSpecName "kube-api-access-j228q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.280476 4732 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.280536 4732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j228q\" (UniqueName: \"kubernetes.io/projected/d0101775-0b8f-47b2-acaf-c422b1a1188f-kube-api-access-j228q\") on node \"crc\" DevicePath \"\"" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.636920 4732 generic.go:334] "Generic (PLEG): container finished" podID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerID="376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c" exitCode=0 Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.637021 4732 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wlbwz" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.637014 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbwz" event={"ID":"d0101775-0b8f-47b2-acaf-c422b1a1188f","Type":"ContainerDied","Data":"376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c"} Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.637115 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbwz" event={"ID":"d0101775-0b8f-47b2-acaf-c422b1a1188f","Type":"ContainerDied","Data":"bc5ae176ccd3492572307abad461c21211d142fcaf11491ffb2592e0ae5c8294"} Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.637168 4732 scope.go:117] "RemoveContainer" containerID="376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.664309 4732 scope.go:117] "RemoveContainer" containerID="813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.699432 4732 scope.go:117] "RemoveContainer" containerID="3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.725439 4732 scope.go:117] "RemoveContainer" containerID="376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c" Jan 31 09:33:08 crc kubenswrapper[4732]: E0131 09:33:08.726170 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c\": container with ID starting with 376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c not found: ID does not exist" containerID="376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.726259 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c"} err="failed to get container status \"376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c\": rpc error: code = NotFound desc = could not find container \"376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c\": container with ID starting with 376a8a4ee9333d2dddafc2640b859ee72406f5f9c7eff661ffc6c4b286d9733c not found: ID does not exist" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.726310 4732 scope.go:117] 
"RemoveContainer" containerID="813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800" Jan 31 09:33:08 crc kubenswrapper[4732]: E0131 09:33:08.727198 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800\": container with ID starting with 813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800 not found: ID does not exist" containerID="813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.727238 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800"} err="failed to get container status \"813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800\": rpc error: code = NotFound desc = could not find container \"813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800\": container with ID starting with 813d4fb6a7493690373fc86d275ae4556c0741965b81814d717d6f7d58fd1800 not found: ID does not exist" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.727258 4732 scope.go:117] "RemoveContainer" containerID="3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c" Jan 31 09:33:08 crc kubenswrapper[4732]: E0131 09:33:08.727593 4732 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c\": container with ID starting with 3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c not found: ID does not exist" containerID="3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.727682 4732 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c"} err="failed to get container status \"3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c\": rpc error: code = NotFound desc = could not find container \"3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c\": container with ID starting with 3f03626663f3253bfec58f7ab74ad941afd3c3098e2de1f44f24082a354aa84c not found: ID does not exist" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.921064 4732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0101775-0b8f-47b2-acaf-c422b1a1188f" (UID: "d0101775-0b8f-47b2-acaf-c422b1a1188f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.982272 4732 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wlbwz"] Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.989543 4732 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wlbwz"] Jan 31 09:33:08 crc kubenswrapper[4732]: I0131 09:33:08.990827 4732 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0101775-0b8f-47b2-acaf-c422b1a1188f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 09:33:10 crc kubenswrapper[4732]: I0131 09:33:10.553891 4732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0101775-0b8f-47b2-acaf-c422b1a1188f" path="/var/lib/kubelet/pods/d0101775-0b8f-47b2-acaf-c422b1a1188f/volumes" Jan 31 09:33:47 crc kubenswrapper[4732]: I0131 09:33:47.498474 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:33:47 crc kubenswrapper[4732]: I0131 09:33:47.499405 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:34:17 crc kubenswrapper[4732]: I0131 09:34:17.498247 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:34:17 crc kubenswrapper[4732]: I0131 09:34:17.498943 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:34:47 crc kubenswrapper[4732]: I0131 09:34:47.498173 4732 patch_prober.go:28] interesting pod/machine-config-daemon-jnbt8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 09:34:47 crc kubenswrapper[4732]: I0131 09:34:47.498839 4732 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 09:34:47 crc kubenswrapper[4732]: I0131 09:34:47.498914 4732 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" Jan 31 09:34:47 crc kubenswrapper[4732]: I0131 09:34:47.499954 4732 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9ab32081e481444933bd2595cf8dbdf7ca3690cb1b45450fb70f36da385fe1c0"} pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 09:34:47 crc kubenswrapper[4732]: I0131 09:34:47.500055 4732 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" podUID="7d790207-d357-4b47-87bf-5b505e061820" containerName="machine-config-daemon" containerID="cri-o://9ab32081e481444933bd2595cf8dbdf7ca3690cb1b45450fb70f36da385fe1c0" gracePeriod=600 Jan 31 09:34:48 crc kubenswrapper[4732]: I0131 09:34:48.492652 4732 generic.go:334] "Generic (PLEG): container finished" podID="7d790207-d357-4b47-87bf-5b505e061820" containerID="9ab32081e481444933bd2595cf8dbdf7ca3690cb1b45450fb70f36da385fe1c0" exitCode=0 Jan 31 09:34:48 crc kubenswrapper[4732]: I0131 09:34:48.492707 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerDied","Data":"9ab32081e481444933bd2595cf8dbdf7ca3690cb1b45450fb70f36da385fe1c0"} Jan 31 09:34:48 crc kubenswrapper[4732]: I0131 09:34:48.493187 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnbt8" event={"ID":"7d790207-d357-4b47-87bf-5b505e061820","Type":"ContainerStarted","Data":"07e2ea90317216d5e6b666282e12273c6b54878cbc22c9597027546751c09788"} Jan 31 09:34:48 crc kubenswrapper[4732]: I0131 09:34:48.493224 4732 scope.go:117] "RemoveContainer" containerID="3dcf08eedc212c9c2a071b251d94a53241ef18f3503ef5e5c6a6b83f842c1e77" Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.809670 4732 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vkvkh"] Jan 31 09:35:56 crc kubenswrapper[4732]: E0131 09:35:56.810382 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerName="extract-utilities" Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.810397 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerName="extract-utilities" Jan 31 09:35:56 crc kubenswrapper[4732]: E0131 09:35:56.810417 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerName="registry-server" Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.810423 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerName="registry-server" Jan 31 09:35:56 crc kubenswrapper[4732]: E0131 09:35:56.810432 4732 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerName="extract-content" Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.810438 4732 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerName="extract-content" Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.810525 4732 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0101775-0b8f-47b2-acaf-c422b1a1188f" containerName="registry-server" Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.811218 4732 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.815358 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkvkh"] Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.950074 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a664da-fbe1-4793-9b5a-115617ee482f-catalog-content\") pod \"redhat-marketplace-vkvkh\" (UID: \"31a664da-fbe1-4793-9b5a-115617ee482f\") " pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.950139 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a664da-fbe1-4793-9b5a-115617ee482f-utilities\") pod \"redhat-marketplace-vkvkh\" (UID: \"31a664da-fbe1-4793-9b5a-115617ee482f\") " pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:56 crc kubenswrapper[4732]: I0131 09:35:56.950634 4732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hjm\" (UniqueName: \"kubernetes.io/projected/31a664da-fbe1-4793-9b5a-115617ee482f-kube-api-access-48hjm\") pod \"redhat-marketplace-vkvkh\" (UID: \"31a664da-fbe1-4793-9b5a-115617ee482f\") " pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.052190 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a664da-fbe1-4793-9b5a-115617ee482f-utilities\") pod \"redhat-marketplace-vkvkh\" (UID: \"31a664da-fbe1-4793-9b5a-115617ee482f\") " pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.052330 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48hjm\" (UniqueName: \"kubernetes.io/projected/31a664da-fbe1-4793-9b5a-115617ee482f-kube-api-access-48hjm\") pod \"redhat-marketplace-vkvkh\" (UID: \"31a664da-fbe1-4793-9b5a-115617ee482f\") " pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.052368 4732 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a664da-fbe1-4793-9b5a-115617ee482f-catalog-content\") pod \"redhat-marketplace-vkvkh\" (UID: \"31a664da-fbe1-4793-9b5a-115617ee482f\") " pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.052803 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a664da-fbe1-4793-9b5a-115617ee482f-utilities\") pod \"redhat-marketplace-vkvkh\" (UID: \"31a664da-fbe1-4793-9b5a-115617ee482f\") " pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.052927 4732 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a664da-fbe1-4793-9b5a-115617ee482f-catalog-content\") pod \"redhat-marketplace-vkvkh\" (UID: \"31a664da-fbe1-4793-9b5a-115617ee482f\") " pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.077095 4732 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-48hjm\" (UniqueName: \"kubernetes.io/projected/31a664da-fbe1-4793-9b5a-115617ee482f-kube-api-access-48hjm\") pod \"redhat-marketplace-vkvkh\" (UID: \"31a664da-fbe1-4793-9b5a-115617ee482f\") " pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.170391 4732 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkvkh" Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.381830 4732 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkvkh"] Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.926517 4732 generic.go:334] "Generic (PLEG): container finished" podID="31a664da-fbe1-4793-9b5a-115617ee482f" containerID="45bed9ac04928120c36db4c589c54bf37b6743385314aa97f460773f7aaa69c1" exitCode=0 Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.926631 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkvkh" event={"ID":"31a664da-fbe1-4793-9b5a-115617ee482f","Type":"ContainerDied","Data":"45bed9ac04928120c36db4c589c54bf37b6743385314aa97f460773f7aaa69c1"} Jan 31 09:35:57 crc kubenswrapper[4732]: I0131 09:35:57.927030 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkvkh" event={"ID":"31a664da-fbe1-4793-9b5a-115617ee482f","Type":"ContainerStarted","Data":"b61072c3d78e3e5dbfaf45ed782698cca5d4c41a4e0c1ff4aa9b5e1d2f54fcda"} Jan 31 09:35:58 crc kubenswrapper[4732]: I0131 09:35:58.939492 4732 generic.go:334] "Generic (PLEG): container finished" podID="31a664da-fbe1-4793-9b5a-115617ee482f" containerID="1ed612f72d10663e0f2280511daf6b51497e45bf045144acef79a40cb4f01314" exitCode=0 Jan 31 09:35:58 crc kubenswrapper[4732]: I0131 09:35:58.939559 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkvkh" event={"ID":"31a664da-fbe1-4793-9b5a-115617ee482f","Type":"ContainerDied","Data":"1ed612f72d10663e0f2280511daf6b51497e45bf045144acef79a40cb4f01314"} Jan 31 09:35:59 crc kubenswrapper[4732]: I0131 09:35:59.951522 4732 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkvkh" event={"ID":"31a664da-fbe1-4793-9b5a-115617ee482f","Type":"ContainerStarted","Data":"2b7742aaa6aadcfcfcfcaa968737dcc49d58ba85297f3ee1a010d5d93cfc7fc3"} Jan 31 09:35:59 crc kubenswrapper[4732]: I0131 09:35:59.984865 4732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vkvkh" podStartSLOduration=2.549629746 podStartE2EDuration="3.984841005s" podCreationTimestamp="2026-01-31 09:35:56 +0000 UTC" firstStartedPulling="2026-01-31 09:35:57.933024648 +0000 UTC m=+2096.238900892" lastFinishedPulling="2026-01-31 09:35:59.368235937 +0000 UTC m=+2097.674112151" observedRunningTime="2026-01-31 09:35:59.98113127 +0000 UTC m=+2098.287007524" watchObservedRunningTime="2026-01-31 09:35:59.984841005 +0000 UTC m=+2098.290717219" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137346417024460 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137346420017367 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015137342005016505 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015137342006015456 5ustar corecore